Book information last updated: 2025-05-01 04:16:28

Statistics for High-Dimensional Data


Book details


Synopsis:

This book is a work at the frontier of statistics. The high-dimensional data it addresses is a hot topic in theoretical research and also has wide practical applications. The book focuses on the Lasso and other variants of L1 methods, and also covers boosting among other topics.
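For orientation, the Lasso mentioned above is the L1-penalized least-squares estimator. The following is the standard textbook formulation in common notation (a sketch for context, not a verbatim quote from this book):

\[
\hat{\beta}(\lambda) = \arg\min_{\beta \in \mathbb{R}^{p}} \left( \frac{\lVert Y - X\beta \rVert_{2}^{2}}{n} + \lambda \lVert \beta \rVert_{1} \right)
\]

Here Y is the n-vector of responses, X the n × p design matrix, and λ ≥ 0 a regularization parameter. The L1 penalty shrinks coefficient estimates and sets many of them exactly to zero, which is what makes the Lasso useful for both prediction and variable selection when the number of variables p far exceeds the sample size n.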

Table of contents:

1 Introduction
  1.1 The framework
  1.2 The possibilities and challenges
  1.3 About the book
    1.3.1 Organization of the book
  1.4 Some examples
    1.4.1 Prediction and biomarker discovery in genomics

2 Lasso for linear models
  2.1 Organization of the chapter
  2.2 Introduction and preliminaries
    2.2.1 The Lasso estimator
  2.3 Orthonormal design
  2.4 Prediction
    2.4.1 Practical aspects about the Lasso for prediction
    2.4.2 Some results from asymptotic theory
  2.5 Variable screening and ‖β̂ − β⁰‖q-norms
    2.5.1 Tuning parameter selection for variable screening
    2.5.2 Motif regression for DNA binding sites
  2.6 Variable selection
    2.6.1 Neighborhood stability and irrepresentable condition
  2.7 Key properties and corresponding assumptions: a summary
  2.8 The adaptive Lasso: a two-stage procedure
    2.8.1 An illustration: simulated data and motif regression
    2.8.2 Orthonormal design
    2.8.3 The adaptive Lasso: variable selection under weak conditions
    2.8.4 Computation
    2.8.5 Multi-step adaptive Lasso
    2.8.6 Non-convex penalty functions
  2.9 Thresholding the Lasso
  2.10 The relaxed Lasso
  2.11 Degrees of freedom of the Lasso
  2.12 Path-following algorithms
    2.12.1 Coordinatewise optimization and shooting algorithms
  2.13 Elastic net: an extension
  Problems

3 Generalized linear models and the Lasso
  3.1 Organization of the chapter
  3.2 Introduction and preliminaries
    3.2.1 The Lasso estimator: penalizing the negative log-likelihood
  3.3 Important examples of generalized linear models
    3.3.1 Binary response variable and logistic regression
    3.3.2 Poisson regression
    3.3.3 Multi-category response variable and multinomial distribution
  Problems

4 The group Lasso
  4.1 Organization of the chapter
  4.2 Introduction and preliminaries
    4.2.1 The group Lasso penalty
  4.3 Factor variables as covariates
    4.3.1 Prediction of splice sites in DNA sequences
  4.4 Properties of the group Lasso for generalized linear models
  4.5 The generalized group Lasso penalty
    4.5.1 Groupwise prediction penalty and parametrization invariance
  4.6 The adaptive group Lasso
  4.7 Algorithms for the group Lasso
    4.7.1 Block coordinate descent
    4.7.2 Block coordinate gradient descent
  Problems

5 Additive models and many smooth univariate functions
  5.1 Organization of the chapter
  5.2 Introduction and preliminaries
    5.2.1 Penalized maximum likelihood for additive models
  5.3 The sparsity-smoothness penalty
    5.3.1 Orthogonal basis and diagonal smoothing matrices
    5.3.2 Natural cubic splines and Sobolev spaces
    5.3.3 Computation
  5.4 A sparsity-smoothness penalty of group Lasso type
    5.4.1 Computational algorithm
    5.4.2 Alternative approaches
  5.5 Numerical examples
    5.5.1 Simulated example
    5.5.2 Motif regression
  5.6 Prediction and variable selection
  5.7 Generalized additive models
  5.8 Linear model with varying coefficients
    5.8.1 Properties for prediction
    5.8.2 Multivariate linear model
  5.9 Multitask learning
  Problems

6 Theory for the Lasso
  6.1 Organization of this chapter
  6.2 Least squares and the Lasso
    6.2.1 Introduction
    6.2.2 The result assuming the truth is linear
    6.2.3 Linear approximation of the truth
    6.2.4 A further refinement: handling smallish coefficients
  6.3 The setup for general convex loss
  6.4 The margin condition
  6.5 Generalized linear model without penalty
  6.6 Consistency of the Lasso for general loss
  6.7 An oracle inequality
  6.8 The ℓq-error for 1 ≤ q ≤ 2
    6.8.1 Application to least squares assuming the truth is linear
    6.8.2 Application to general loss and a sparse approximation of the truth
  6.9 The weighted Lasso
  6.10 The adaptively weighted Lasso
  6.11 Concave penalties
    6.11.1 Sparsity oracle inequalities for least squares with ℓr-penalty
    6.11.2 Proofs of this section (Section 6.11)
  6.12 Compatibility and (random) matrices
  6.13 On the compatibility condition
    6.13.1 Direct bounds for the compatibility constant
    6.13.2 Bounds using ‖βS‖₁² ≤ s‖βS‖₂²
    6.13.3 Sets N containing S
    6.13.4 Restricted isometry
    6.13.5 Sparse eigenvalues
    6.13.6 Further coherence notions
    6.13.7 An overview of the various eigenvalue flavored constants
  Problems

7 Variable selection with the Lasso
  7.1 Introduction
  7.2 Some results from literature
  7.3 Organization of this chapter
  7.4 The beta-min condition
  7.5 The irrepresentable condition in the noiseless case
    7.5.1 Definition of the irrepresentable condition
    7.5.2 The KKT conditions
    7.5.3 Necessity and sufficiency for variable selection
    7.5.4 The irrepresentable condition implies the compatibility condition
    7.5.5 The irrepresentable condition and restricted regression
    7.5.6 Selecting a superset of the true active set
    7.5.7 The weighted irrepresentable condition
    7.5.8 The weighted irrepresentable condition and restricted regression
    7.5.9 The weighted Lasso with "ideal" weights
  7.6 Definition of the adaptive and thresholded Lasso
    7.6.1 Definition of the adaptive Lasso
    7.6.2 Definition of the thresholded Lasso
    7.6.3 Order symbols
  7.7 A recollection of the results obtained in Chapter 6
  7.8 The adaptive Lasso and thresholding: invoking sparse eigenvalues
    7.8.1 The conditions on the tuning parameters
    7.8.2 The results
    7.8.3 Comparison with the Lasso
    7.8.4 Comparison between adaptive and thresholded Lasso
    7.8.5 Bounds for the number of false negatives
    7.8.6 Imposing beta-min conditions
  7.9 The adaptive Lasso without invoking sparse eigenvalues
    7.9.1 The condition on the tuning parameter
    7.9.2 The results
  7.10 Some concluding remarks
  7.11 Technical complements for the noiseless case without sparse eigenvalues
    7.11.1 Prediction error for the noiseless (weighted) Lasso
    7.11.2 The number of false positives of the noiseless (weighted) Lasso
    7.11.3 Thresholding the noiseless initial estimator
    7.11.4 The noiseless adaptive Lasso
  7.12 Technical complements for the noisy case without sparse eigenvalues
  7.13 Selection with concave penalties
  Problems

8 Theory for ℓ1/ℓ2-penalty procedures
  8.1 Introduction
  8.2 Organization and notation of this chapter
  8.3 Regression with group structure
    8.3.1 The loss function and penalty
    8.3.2 The empirical process
    8.3.3 The group Lasso compatibility condition
    8.3.4 A group Lasso sparsity oracle inequality
    8.3.5 Extensions
  8.4 High-dimensional additive model
    8.4.1 The loss function and penalty
    8.4.2 The empirical process
    8.4.3 The smoothed Lasso compatibility condition
    8.4.4 A smoothed group Lasso sparsity oracle inequality
    8.4.5 On the choice of the penalty
  8.5 Linear model with time-varying coefficients
    8.5.1 The loss function and penalty
    8.5.2 The empirical process
    8.5.3 The compatibility condition for the time-varying coefficients model
    8.5.4 A sparsity oracle inequality for the time-varying coefficients model
  8.6 Multivariate linear model and multitask learning
    8.6.1 The loss function and penalty
    8.6.2 The empirical process
    8.6.3 The multitask compatibility condition
    8.6.4 A multitask sparsity oracle inequality
  8.7 The approximation condition for the smoothed group Lasso
    8.7.1 Sobolev smoothness
    8.7.2 Diagonalized smoothness
  Problems

……

9 Non-convex loss functions and ℓ1-regularization
10 Stable solutions
11 P-values for linear models and beyond
12 Boosting and greedy algorithms
13 Graphical modeling
14 Probability and moment inequalities
Author Index
Index
References

About the authors:

Peter Bühlmann and Sara van de Geer, both at ETHZ, are renowned experts in high-dimensional statistics and causal inference.



Download ratings

  • Practical (347+)
  • No password needed (131+)
  • Read-aloud (1809+)
  • Great tool (282+)
  • Interesting (796+)
  • Hidden gem (242+)
  • Keeper (142+)
  • Low resolution (840+)
  • Missing chapters (405+)
  • Self-study (224+)
  • Has table of contents (411+)
  • Career (399+)
  • Watermarked (811+)
  • Bookmarked (357+)
  • Highlighting (731+)
  • Printable (243+)
  • Well-typeset (449+)
  • mobi (668+)

Download comments

  • User 1730481175: ( 2024-11-02 01:12:55 )

    Bilingual support combined with the MOBI/TXT formats; a high-quality digital reading experience, worth keeping.

  • User 1718502198: ( 2024-06-16 09:43:18 )

    Audio support combined with the MOBI/TXT formats; a carefully proofread digital reading experience, easy to use.

  • User 1718455247: ( 2024-06-15 20:40:47 )

    The EPUB/MOBI files downloaded smoothly; a complete academic work recommended for keeping; excellent resource.

  • User 1739610111: ( 2025-02-15 17:01:51 )

    A high-quality journal resource; the audio design enhances the reading experience; worth keeping.

  • User 1744877147: ( 2025-04-17 16:05:47 )

    Illustrations and text combined with the PDF/AZW3 formats; a carefully proofread digital reading experience, worth keeping.


Related reviews

No one has reviewed this book yet.