Numerical analysis for statisticians, Kenneth Lange

Label
Numerical analysis for statisticians
Title
Numerical analysis for statisticians
Statement of responsibility
Kenneth Lange
Creator
Kenneth Lange
Subject
Language
eng
Summary
Every advance in computer architecture and software tempts statisticians to tackle numerically harder problems. To do so intelligently requires a good working knowledge of numerical analysis. This book is intended to equip students to craft their own software and to understand the advantages and disadvantages of different numerical methods. Numerical Analysis for Statisticians can serve as a graduate text for either a one- or a two-semester course surveying computational statistics. With a careful selection of topics and appropriate supplementation, it can even be used at the undergraduate level. Because many of the chapters are nearly self-contained, professional statisticians will also find the book useful as a reference.
Cataloging source
DLC
Dewey number
519.4
Index
index present
LC call number
QA297
LC item number
.L34 1998
Literary form
non fiction
Nature of contents
bibliography
Series statement
Statistics and computing
Label
Numerical analysis for statisticians, Kenneth Lange
Publication
Springer, 2010
Bibliography note
Includes bibliographical references and index
Contents
  • Machine generated contents note: 1.Recurrence Relations -- 1.1.Introduction -- 1.2.Binomial Coefficients -- 1.3.Number of Partitions of a Set -- 1.4.Horner's Method -- 1.5.Sample Means and Variances -- 1.6.Expected Family Size -- 1.7.Poisson-Binomial Distribution -- 1.8.A Multinomial Test Statistic -- 1.9.An Unstable Recurrence -- 1.10.Quick Sort -- 1.11.Problems -- 1.12.References -- 2.Power Series Expansions -- 2.1.Introduction -- 2.2.Expansion of P(s) -- 2.2.1.Application to Moments -- 2.3.Expansion of e^P(s) -- 2.3.1.Moments to Cumulants and Vice Versa -- 2.3.2.Compound Poisson Distributions -- 2.3.3.Evaluation of Hermite Polynomials -- 2.4.Standard Normal Distribution Function -- 2.5.Incomplete Gamma Function -- 2.6.Incomplete Beta Function -- 2.7.Connections to Other Distributions -- 2.7.1.Chi-square and Standard Normal -- 2.7.2.Poisson -- 2.7.3.Binomial and Negative Binomial -- 2.7.4.F and Student's t -- 2.7.5.Monotonic Transformations -- 2.8.Problems -- 2.9.References --
  • Contents note continued: 3.Continued Fraction Expansions -- 3.1.Introduction -- 3.2.Wallis's Algorithm -- 3.3.Equivalence Transformations -- 3.4.Gauss's Expansion of Hypergeometric Functions -- 3.5.Expansion of the Incomplete Gamma Function -- 3.6.Problems -- 3.7.References -- 4.Asymptotic Expansions -- 4.1.Introduction -- 4.2.Order Relations -- 4.3.Finite Taylor Expansions -- 4.4.Expansions via Integration by Parts -- 4.4.1.Exponential Integral -- 4.4.2.Incomplete Gamma Function -- 4.4.3.Laplace Transforms -- 4.5.General Definition of an Asymptotic Expansion -- 4.6.Laplace's Method -- 4.6.1.Moments of an Order Statistic -- 4.6.2.Stirling's Formula -- 4.6.3.Posterior Expectations -- 4.7.Validation of Laplace's Method -- 4.8.Problems -- 4.9.References -- 5.Solution of Nonlinear Equations -- 5.1.Introduction -- 5.2.Bisection -- 5.2.1.Computation of Quantiles by Bisection -- 5.2.2.Shortest Confidence Interval -- 5.3.Functional Iteration -- 5.3.1.Fractional Linear Transformations --
  • Contents note continued: 5.3.2.Extinction Probabilities by Functional Iteration -- 5.4.Newton's Method -- 5.4.1.Division without Dividing -- 5.4.2.Extinction Probabilities by Newton's Method -- 5.5.Golden Section Search -- 5.6.Minimization by Cubic Interpolation -- 5.7.Stopping Criteria -- 5.8.Problems -- 5.9.References -- 6.Vector and Matrix Norms -- 6.1.Introduction -- 6.2.Elementary Properties of Vector Norms -- 6.3.Elementary Properties of Matrix Norms -- 6.4.Norm Preserving Linear Transformations -- 6.5.Iterative Solution of Linear Equations -- 6.5.1.Jacobi's Method -- 6.5.2.Landweber's Iteration Scheme -- 6.5.3.Equilibrium Distribution of a Markov Chain -- 6.6.Condition Number of a Matrix -- 6.7.Problems -- 6.8.References -- 7.Linear Regression and Matrix Inversion -- 7.1.Introduction -- 7.2.Motivation from Linear Regression -- 7.3.Motivation of the Sweep Operator -- 7.4.Definition of the Sweep Operator -- 7.5.Properties of the Sweep Operator --
  • Contents note continued: 7.6.Applications of Sweeping -- 7.7.Cholesky Decompositions -- 7.8.Gram-Schmidt Orthogonalization -- 7.9.Orthogonalization by Householder Reflections -- 7.10.Comparison of the Different Algorithms -- 7.11.Woodbury's Formula -- 7.12.Problems -- 7.13.References -- 8.Eigenvalues and Eigenvectors -- 8.1.Introduction -- 8.2.Jacobi's Method -- 8.3.The Rayleigh Quotient -- 8.4.Finding a Single Eigenvalue -- 8.5.Problems -- 8.6.References -- 9.Singular Value Decomposition -- 9.1.Introduction -- 9.2.Basic Properties of the SVD -- 9.3.Applications -- 9.3.1.Reduced Rank Regression -- 9.3.2.Ridge Regression -- 9.3.3.Polar Decomposition -- 9.3.4.Image Compression -- 9.3.5.Principal Components -- 9.3.6.Total Least Squares -- 9.4.Jacobi's Algorithm for the SVD -- 9.5.Problems -- 9.6.References -- 10.Splines -- 10.1.Introduction -- 10.2.Definition and Basic Properties -- 10.3.Applications to Differentiation and Integration --
  • Contents note continued: 10.4.Application to Nonparametric Regression -- 10.5.Problems -- 10.6.References -- 11.Optimization Theory -- 11.1.Introduction -- 11.2.Unconstrained Optimization -- 11.3.Optimization with Equality Constraints -- 11.4.Optimization with Inequality Constraints -- 11.5.Convexity -- 11.6.Block Relaxation -- 11.7.Problems -- 11.8.References -- 12.The MM Algorithm -- 12.1.Introduction -- 12.2.Philosophy of the MM Algorithm -- 12.3.Majorization and Minorization -- 12.4.Linear Regression -- 12.5.Elliptically Symmetric Densities and lp Regression -- 12.6.Bradley-Terry Model of Ranking -- 12.7.A Random Graph Model -- 12.8.Linear Logistic Regression -- 12.9.Unconstrained Geometric Programming -- 12.10.Poisson Processes -- 12.11.Transmission Tomography -- 12.12.Problems -- 12.13.References -- 13.The EM Algorithm -- 13.1.Introduction -- 13.2.General Definition of the EM Algorithm -- 13.3.Ascent Property of the EM Algorithm -- 13.3.1.Technical Note --
  • Contents note continued: 13.4.Missing Data in the Ordinary Sense -- 13.5.Bayesian EM -- 13.6.Allele Frequency Estimation -- 13.7.Clustering by EM -- 13.8.Transmission Tomography -- 13.9.Factor Analysis -- 13.10.Problems -- 13.11.References -- 14.Newton's Method and Scoring -- 14.1.Introduction -- 14.2.Newton's Method and Root Finding -- 14.3.Newton's Method and Optimization -- 14.4.Ad Hoc Approximations of Hessians -- 14.5.Scoring and Exponential Families -- 14.6.The Gauss-Newton Algorithm -- 14.7.Generalized Linear Models -- 14.8.MM Gradient Algorithm -- 14.9.Quasi-Newton Methods -- 14.10.Accelerated MM -- 14.11.Problems -- 14.12.References -- 15.Local and Global Convergence -- 15.1.Introduction -- 15.2.Calculus Preliminaries -- 15.3.Local Rates of Convergence -- 15.4.Global Convergence of the MM Algorithm -- 15.5.Global Convergence of Block Relaxation -- 15.6.Global Convergence of Gradient Algorithms -- 15.7.Problems -- 15.8.References -- 16.Advanced Optimization Topics --
  • Contents note continued: 16.1.Introduction -- 16.2.Barrier and Penalty Methods -- 16.3.Adaptive Barrier Methods -- 16.4.Dykstra's Algorithm -- 16.5.Model Selection and the Lasso -- 16.5.1.Application to l1 Regression -- 16.5.2.Application to l2 Regression -- 16.5.3.Application to Generalized Linear Models -- 16.5.4.Application to Discriminant Analysis -- 16.6.Standard Errors -- 16.6.1.Standard Errors and the MM Algorithm -- 16.6.2.Standard Errors and Linear Constraints -- 16.7.Problems -- 16.8.References -- 17.Concrete Hilbert Spaces -- 17.1.Introduction -- 17.2.Definitions and Basic Properties -- 17.3.Fourier Series -- 17.4.Orthogonal Polynomials -- 17.5.Reproducing Kernel Hilbert Spaces -- 17.6.Application to Spline Estimation -- 17.7.Problems -- 17.8.References -- 18.Quadrature Methods -- 18.1.Introduction -- 18.2.Euler-Maclaurin Sum Formula -- 18.3.Romberg's Algorithm -- 18.4.Adaptive Quadrature -- 18.5.Taming Bad Integrands -- 18.6.Gaussian Quadrature -- 18.7.Problems --
  • Contents note continued: 18.8.References -- 19.The Fourier Transform -- 19.1.Introduction -- 19.2.Basic Properties -- 19.3.Examples -- 19.4.Further Theory -- 19.5.Edgeworth Expansions -- 19.6.Problems -- 19.7.References -- 20.The Finite Fourier Transform -- 20.1.Introduction -- 20.2.Basic Properties -- 20.3.Derivation of the Fast Fourier Transform -- 20.4.Approximation of Fourier Series Coefficients -- 20.5.Convolution -- 20.6.Time Series -- 20.7.Problems -- 20.8.References -- 21.Wavelets -- 21.1.Introduction -- 21.2.Haar's Wavelets -- 21.3.Histogram Estimators -- 21.4.Daubechies' Wavelets -- 21.5.Multiresolution Analysis -- 21.6.Image Compression and the Fast Wavelet Transform -- 21.7.Problems -- 21.8.References -- 22.Generating Random Deviates -- 22.1.Introduction -- 22.2.Portable Random Number Generators -- 22.3.The Inverse Method -- 22.4.Normal Random Deviates -- 22.5.Acceptance-Rejection Method -- 22.6.Adaptive Acceptance-Rejection Sampling -- 22.7.Ratio Method --
  • Contents note continued: 22.8.Deviates by Definition -- 22.9.Multivariate Deviates -- 22.10.Sequential Sampling -- 22.11.Problems -- 22.12.References -- 23.Independent Monte Carlo -- 23.1.Introduction -- 23.2.Importance Sampling -- 23.3.Stratified Sampling -- 23.4.Antithetic Variates -- 23.5.Control Variates -- 23.6.Rao-Blackwellization -- 23.7.Sequential Importance Sampling -- 23.8.Problems -- 23.9.References -- 24.Permutation Tests and the Bootstrap -- 24.1.Introduction -- 24.2.Permutation Tests -- 24.3.The Bootstrap -- 24.3.1.Range of Applications -- 24.3.2.Estimation of Standard Errors -- 24.3.3.Bias Reduction -- 24.3.4.Confidence Intervals -- 24.3.5.Applications in Regression -- 24.4.Efficient Bootstrap Simulations -- 24.4.1.The Balanced Bootstrap -- 24.4.2.The Antithetic Bootstrap -- 24.4.3.Importance Resampling -- 24.5.Problems -- 24.6.References -- 25.Finite-State Markov Chains -- 25.1.Introduction -- 25.2.Discrete-Time Markov Chains -- 25.3.Hidden Markov Chains --
  • Contents note continued: 25.4.Connections to the EM Algorithm -- 25.5.Continuous-Time Markov Chains -- 25.6.Calculation of Matrix Exponentials -- 25.7.Calculation of the Equilibrium Distribution -- 25.8.Stochastic Simulation and Intensity Leaping -- 25.9.Problems -- 25.10.References -- 26.Markov Chain Monte Carlo -- 26.1.Introduction -- 26.2.The Hastings-Metropolis Algorithm -- 26.3.Gibbs Sampling -- 26.4.Other Examples of Hastings-Metropolis Sampling -- 26.5.Some Practical Advice -- 26.6.Simulated Annealing -- 26.7.Problems -- 26.8.References -- 27.Advanced Topics in MCMC -- 27.1.Introduction -- 27.2.Markov Random Fields -- 27.3.Reversible Jump MCMC -- 27.4.Metrics for Convergence -- 27.5.Convergence Rates for Finite Chains -- 27.6.Convergence of the Independence Sampler -- 27.7.Operators and Markov Chains -- 27.8.Compact Operators and Gibbs Sampling -- 27.9.Convergence Rates for Gibbs Sampling -- 27.10.Problems -- 27.11.References --
  • Contents note continued: Appendix The Multivariate Normal Distribution -- A.1.References
Control code
000046068296
Dimensions
25 cm
Edition
2nd ed
Extent
xx, 600 p.
Isbn
9781441959447
Lccn
2010927594
Other physical details
ill.
http://library.link/vocab/recordID
.b27156643
System control number
  • (OCoLC)651129183
  • springer1441959440

Library Locations

    • Deakin University Library - Geelong Waurn Ponds Campus
      75 Pigdons Road, Waurn Ponds, Victoria, 3216, AU
      -38.195656 144.304955