Author information
Author: Trevor Hastie
Documents available written by this author (3)



Title: An introduction to statistical learning : with applications in R
Document type: electronic document
Authors: James, Gareth; Witten, Daniela; Hastie, Trevor; Tibshirani, Robert; SpringerLink (Online service)
Publisher: New York, NY : Springer New York
Publication date: 2013
Other publisher: Imprint: Springer
Series: Springer Texts in Statistics, ISSN 1431-875X, no. 103
Pages: XIV, 426 p., 150 illus., 146 illus. in color
Illustrations: online resource
ISBN/ISSN/DL: 978-1-4614-7138-7
Language: English (eng)
Keywords: Statistics; Artificial intelligence; Statistical Theory and Methods; Computing/Statistics Programs; Intelligence (incl. Robotics); Statistics, general
Classification: 519.2 Probability and mathematical statistics
Abstract: An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform. Two of the authors co-wrote The Elements of Statistical Learning (Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference book for statistics and machine learning researchers. An Introduction to Statistical Learning covers many of the same topics, but at a level accessible to a much broader audience. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. The text assumes only a previous course in linear regression and no knowledge of matrix algebra.
Contents: Introduction -- Statistical Learning -- Linear Regression -- Classification -- Resampling Methods -- Linear Model Selection and Regularization -- Moving Beyond Linearity -- Tree-Based Methods -- Support Vector Machines -- Unsupervised Learning -- Index
Online: http://dx.doi.org/10.1007/978-1-4614-7138-7
Link: https://biblioteca.cunef.edu/gestion/catalogo/index.php?lvl=notice_display&id=32338
Copies
No copies available.

The elements of statistical learning / Trevor Hastie (2009)
Title: The elements of statistical learning : data mining, inference, and prediction
Document type: printed text
Authors: Hastie, Trevor, Author; Tibshirani, Robert, Author; Friedman, Jerome, Author
Edition statement: 2nd ed.
Publisher: Berlin ; New York ; Paris : Springer
Publication date: 2009
Series: Springer series in statistics
Pages: XII, 745 p.
Dimensions: 24 cm
ISBN/ISSN/DL: 978-0-387-84857-0
Language: English (eng)
Subjects: Statistical inference; Artificial intelligence; Neural networks
Classification: 004.8 Artificial intelligence. Automated reasoning and machine learning. Intelligent systems
Contents note: Bibliography, author index and index.
Link: https://biblioteca.cunef.edu/gestion/catalogo/index.php?lvl=notice_display&id=10450
Copies (5)
Call number | Medium | Location | Sub-location | Section | Status
004.8 HAS ele | Monografías | Campus Pirineos | | Monografías | Available
004.8 HAS ele | Monografías | Campus Pirineos | | Monografías | Available
004.8 HAS ele | Monografías | Campus Pirineos | | Monografías | Available
004.8 HAS ele | Monografías | Campus Pirineos | | Monografías | On loan until 08/02/2018
004.8 HAS ele | Monografías | Campus Pirineos | | Monografías | In-library consultation only (excluded from loan)

Title: The Elements of Statistical Learning : Data Mining, Inference, and Prediction
Document type: electronic document
Authors: Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome; SpringerLink (Online service)
Publisher: New York, NY : Springer New York
Publication date: 2009
Other publisher: Imprint: Springer
Series: Springer Series in Statistics, ISSN 0172-7397
Pages: XXII, 745 p., 282 illus.
Illustrations: online resource
ISBN/ISSN/DL: 978-0-387-84858-7
Language: English (eng)
Subjects: Statistics; Automatic data processing
Keywords: Computer science; Data mining; Artificial intelligence; Bioinformatics; Computational biology; Probabilities; Statistics; Science; Intelligence (incl. Robotics); Mining and Knowledge Discovery; Probability Theory; Stochastic Processes; Statistical Methods; Biology/Bioinformatics; Appl. in Life Sciences
Classification: 519.2 Probability and mathematical statistics
Abstract: During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees, and boosting (the first comprehensive treatment of this topic in any book). This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression and path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than n), including multiple testing and false discovery rates. Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit, and gradient boosting.
Contents: Overview of Supervised Learning -- Linear Methods for Regression -- Linear Methods for Classification -- Basis Expansions and Regularization -- Kernel Smoothing Methods -- Model Assessment and Selection -- Model Inference and Averaging -- Additive Models, Trees, and Related Methods -- Boosting and Additive Trees -- Neural Networks -- Support Vector Machines and Flexible Discriminants -- Prototype Methods and Nearest-Neighbors -- Unsupervised Learning -- Random Forests -- Ensemble Learning -- Undirected Graphical Models -- High-Dimensional Problems: p ≫ N
Online: http://dx.doi.org/10.1007/978-0-387-84858-7
Link: https://biblioteca.cunef.edu/gestion/catalogo/index.php?lvl=notice_display&id=33887
Copies
No copies available.