Search results
19 results for the keyword 'Psychometrics'




Linking and Aligning Scores and Scales / SpringerLink (Online service) ; Neil J. Dorans ; Mary Pommerich ; Paul W. Holland (2007)
Title: Linking and Aligning Scores and Scales
Document type: electronic document
Authors: SpringerLink (Online service) ; Neil J. Dorans ; Mary Pommerich ; Paul W. Holland
Publisher: New York, NY : Springer New York
Publication date: 2007
Series: Statistics for Social and Behavioral Sciences, ISSN 2199-7357
Pages: XX, 396 p.
Format: online resource
ISBN: 978-0-387-49771-6
Language: English (eng)
Keywords: Statistics; Assessment; Psychology Methodology; Psychological measurement; Psychometrics for Social Science, Behavioral Education, Public Policy, and Law; Assessment, Testing and Evaluation; Methods/Evaluation
Classification: 51 Mathematics
Abstract: The comparability of measurements made in differing circumstances by different methods and investigators is a fundamental precondition for all of science. Successful applications of technology require comparable measurements. While the applications here focus on educational tests, score linking issues are directly applicable to medicine and many branches of behavioral science. Since the 1980s, the fields of educational and psychological measurement have enhanced and widely applied techniques for producing linked scores that are comparable. The interpretation attached to a linkage depends on how the conditions of the linkage differ from the ideal. In this book, experts in statistics and psychometrics describe classes of linkages, the history of score linking, data collection designs, and methods used to achieve sound score linkages. They describe and critically discuss applications to a variety of domains, including equating of achievement exams, linkages between computer-delivered exams and paper-and-pencil exams, concordances between the current version of the SAT® and its predecessor, concordances between the ACT® and the SAT®, vertical linkages of exams that span grade levels, and linkages of scales from high-stakes state assessments to the scales of the National Assessment of Educational Progress (NAEP). Dr. Neil J. Dorans is a Distinguished Presidential Appointee at Educational Testing Service. During his 27 years at ETS, he has had primary responsibility for the statistical work associated with the AP®, PSAT/NMSQT®, and SAT® exams. He was the architect of the recentered SAT scales. He has guest edited special issues on score linking for Applied Measurement in Education, Applied Psychological Measurement, and the Journal of Educational Measurement. Dr. Mary Pommerich is a psychometrician in the Personnel Testing Division of the Defense Manpower Data Center, where she works with the ASVAB (Armed Services Vocational Aptitude Battery) testing program. She guest edited a special issue on concordance for Applied Psychological Measurement. Her research is typically generated by practical testing problems and has focused on a wide variety of issues, including linking and concordance. Dr. Paul W. Holland is the Frederic M. Lord Chair in Measurement and Statistics at Educational Testing Service and was previously a professor in the School of Education and the Department of Statistics at the University of California, Berkeley. His books include Discrete Multivariate Analysis, Differential Item Functioning, Perspectives on Social Network Research, and two books on test score equating. He is a fellow of the American Statistical Association and the Institute of Mathematical Statistics, was designated a National Associate of the National Academies, received a career contributions award from the National Council on Measurement in Education, and was elected to the National Academy of Education.
Contents: Overview -- Overview -- Foundations -- A Framework and History for Score Linking -- Data Collection Designs and Linking Procedures -- Equating -- Equating: Best Practices and Challenges to Best Practices -- Practical Problems in Equating Test Scores: A Practitioner's Perspective -- Potential Solutions to Practical Equating Issues -- Tests in Transition -- Score Linking Issues Related to Test Content Changes -- Linking Scores Derived Under Different Modes of Test Administration -- Tests in Transition: Discussion and Synthesis -- Concordance -- Sizing Up Linkages -- Concordance: The Good, the Bad, and the Ugly -- Some Further Thoughts on Concordance -- Vertical Scaling -- Practical Issues in Vertical Scaling -- Methods and Models for Vertical Scaling -- Vertical Scaling and No Child Left Behind -- Linking Group Assessments to Individual Assessments -- Linking Assessments Based on Aggregate Reporting: Background and Issues -- An Enhanced Method for Mapping State Standards onto the NAEP Scale -- Using Aggregate-Level Linkages for Estimation and Validation: Comments on Thissen and Braun & Qian -- Postscript
Online: http://dx.doi.org/10.1007/978-0-387-49771-6
Link: https://biblioteca.cunef.edu/gestion/catalogo/index.php?lvl=notice_display&id=34496
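The equating and concordance work this record describes rests on mapping scores from one test form to comparable scores on another. As a rough illustration of one classical technique the book covers, equipercentile equating, here is a minimal sketch that maps each score on form X to the form-Y score with the same percentile rank. All score data and the midpoint percentile-rank convention below are illustrative assumptions, not taken from the book.

```python
import numpy as np

def percentile_ranks(scores, max_score):
    """Percentile rank (0-1) at each integer score point, using the
    conventional midpoint definition: P(s) = F(s-1) + f(s)/2."""
    counts = np.bincount(np.asarray(scores), minlength=max_score + 1)
    rel = counts / counts.sum()
    return np.cumsum(rel) - rel / 2.0

def equipercentile_equate(x_scores, y_scores, max_score):
    """Map each score point on form X to the form-Y score with the same
    percentile rank, interpolating linearly between Y score points."""
    pr_x = percentile_ranks(x_scores, max_score)
    pr_y = percentile_ranks(y_scores, max_score)
    return np.interp(pr_x, pr_y, np.arange(max_score + 1))

# Toy data: form Y is slightly harder, so X scores should map downward.
rng = np.random.default_rng(0)
x = rng.binomial(20, 0.60, size=5000)   # form X scores, 0..20
y = rng.binomial(20, 0.55, size=5000)   # form Y scores, 0..20
eq = equipercentile_equate(x, y, 20)    # eq[s] = Y-scale equivalent of X score s
```

In practice the conversion would then be smoothed and evaluated against the data collection design, which is exactly the kind of issue the chapters above discuss.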
Title: Looking Back : Proceedings of a Conference in Honor of Paul W. Holland
Document type: electronic document
Authors: SpringerLink (Online service) ; Neil J. Dorans ; Sandip Sinharay
Publisher: New York, NY : Springer New York
Publication date: 2011
Other publisher: Imprint: Springer
Series: Lecture Notes in Statistics, ISSN 0930-0325, no. 202
Pages: XVIII, 283 p., 37 illus.
Format: online resource
ISBN: 978-1-4419-9389-2
Language: English (eng)
Keywords: Statistics; Mathematics; Assessment; Social sciences; Psychometrics for Science, Behavioral Education, Public Policy, and Law; Assessment, Testing and Evaluation; Methodology of the Sciences; Mathematics, general
Classification: 51 Mathematics
Abstract: In 2006, Paul W. Holland retired from Educational Testing Service (ETS) after a career spanning five decades. In 2008, ETS sponsored a conference, Looking Back, honoring his contributions to applied and theoretical psychometrics and statistics. Looking Back attracted a large audience that came to pay homage to Paul Holland and to hear presentations by colleagues who worked with him in special ways over those 40+ years. This book contains papers based on these presentations, as well as vignettes provided by Paul Holland before each section. The papers in this book attest to how Paul Holland's pioneering ideas influenced and continue to influence several fields, such as social networks, causal inference, item response theory, equating, and DIF. He applied statistical thinking to a broad range of ETS activities in test development, statistical analysis, test security, and operations. The original papers contained in this book provide historical context for Paul Holland's work alongside commentary on some of his major contributions by noteworthy statisticians working today.
Contents: The Contributions of Paul Holland -- Algebraic Statistics for p1 Random Graph Models -- Mr. Holland's Networks: A Brief Review of the Importance of Statistical Studies of Local Subgraphs or One Small Tune in a Large Opus -- Some of My Favorite Things About Working at ETS -- Bayesian Analysis of a Two-Group Randomized Encouragement Design -- The Role of Nonparametric Analysis in Assessment Modeling: Then and Now -- What Aspects of the Design of an Observational Study Affect Its Sensitivity to Bias From Covariates That Were Not Observed? -- The Origins of Procedures for Using Differential Item Functioning Statistics at Educational Testing Service -- Why I Left ETS and Returned -- Cause or Effect? Validating the Use of Tests for High-Stakes Inferences in Education -- Propensity Score Matching to Extract Latent Experiments From Nonexperimental Data: A Case Study -- Returning to ETS from Berkeley -- Loglinear Models as Smooth Operators: Holland's Statistical Applications and Their Practical Uses -- Chain Equipercentile Equating and Frequency Estimation Equipercentile Equating: Comparisons Based on Real and Simulated Data -- An Observed-Score Equating Framework -- Great Colleagues Make a Great Institution -- An Exploratory Analysis of Charter Schools -- Holland's Advice for the Fourth Generation of Test Theory: Blood Tests Can Be Contests
Online: http://dx.doi.org/10.1007/978-1-4419-9389-2
Link: https://biblioteca.cunef.edu/gestion/catalogo/index.php?lvl=notice_display&id=33183
Title: Multidimensional Item Response Theory
Document type: electronic document
Authors: M. D. Reckase ; SpringerLink (Online service)
Publisher: New York, NY : Springer New York
Publication date: 2009
Series: Statistics for Social and Behavioral Sciences, ISSN 2199-7357
Pages: X, 354 p.
Format: online resource
ISBN: 978-0-387-89976-3
Language: English (eng)
Keywords: Statistics; Computer simulation; Social sciences; Psychology Methodology; Psychological measurement; Psychometrics for Science, Behavioral Education, Public Policy, and Law; Simulation Modeling; Methods/Evaluation of the Sciences
Classification: 51 Mathematics
Abstract: Multidimensional Item Response Theory is the first book to give thorough coverage to this emerging area of psychometrics. The book describes the commonly used multidimensional item response theory (MIRT) models and the important methods needed for their practical application. These methods include ways to determine the number of dimensions required to adequately model data, procedures for estimating model parameters, ways to define the space for a MIRT model, and procedures for transforming calibrations from different samples to put them in the same space. A full chapter is devoted to methods for multidimensional computerized adaptive testing. The text is appropriate for an advanced course in psychometric theory or as a reference work for those interested in applying MIRT methodology. A working knowledge of unidimensional item response theory and matrix algebra is assumed. Knowledge of factor analysis is also helpful. Mark D. Reckase is a professor of Measurement and Quantitative Methods in the College of Education at Michigan State University. He has been president of the National Council on Measurement in Education, Vice President of Division D of the American Educational Research Association, a member of the Board of Trustees of the Psychometric Society, and the editor of Applied Psychological Measurement and the Journal of Educational Measurement. He has been doing research in the area of MIRT since 1972.
Contents: Unidimensional Item Response Theory Models -- Historical Background for Multidimensional Item Response Theory (MIRT) -- Multidimensional Item Response Theory Models -- Statistical Descriptions of Item and Test Functioning -- Estimation of Item and Person Parameters -- Analyzing the Structure of Test Data -- Transforming Parameter Estimates to a Specified Coordinate System -- Linking and Scaling -- Computerized Adaptive Testing Using MIRT
Online: http://dx.doi.org/10.1007/978-0-387-89976-3
Link: https://biblioteca.cunef.edu/gestion/catalogo/index.php?lvl=notice_display&id=33921
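The MIRT models this record refers to extend unidimensional IRT by letting an item respond to a vector of abilities. As a small hedged sketch (not code from the book), here is the compensatory multidimensional 2PL form commonly discussed in this literature, with illustrative parameter values chosen for the example.

```python
import numpy as np

def mirt_prob(theta, a, d):
    """Compensatory multidimensional 2PL: P(correct) = logistic(a.theta + d),
    where theta is the examinee's ability vector, a the item's vector of
    discrimination parameters, and d a scalar intercept. High ability on
    one dimension can compensate for low ability on another."""
    z = np.dot(a, theta) + d
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical item loading strongly on dimension 1, weakly on dimension 2.
a = np.array([1.2, 0.4])
d = -0.5

p_avg = mirt_prob(np.array([0.0, 0.0]), a, d)  # average examinee, ~0.378
p_hi = mirt_prob(np.array([1.0, 1.0]), a, d)   # high on both dims, ~0.750
```

Determining how many such dimensions the data support, and fixing the coordinate system they live in, are the estimation and linking problems the chapter list above addresses.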
Title: Multivariate and Mixture Distribution Rasch Models : Extensions and Applications
Document type: electronic document
Authors: Matthias von Davier ; SpringerLink (Online service) ; Claus H. Carstensen
Publisher: New York, NY : Springer New York
Publication date: 2007
Other publisher: Imprint: Springer
Series: Statistics for Social and Behavioral Sciences, ISSN 2199-7357
Pages: XIII, 398 p., 42 illus.
Format: online resource
ISBN: 978-0-387-49839-3
Language: English (eng)
Keywords: Statistics; Medical research; Education; Quality of life; Psychology Methodology; Psychological measurement; Psychometrics for Social Science, Behavioral Education, Public Policy, and Law; Life Research; Methods/Evaluation
Classification: 51 Mathematics
Abstract: This volume covers extensions of the Rasch model, one of the most researched and applied models in educational research and social science. This collection contains 22 chapters by some of the most recognized international experts in the field. They cover topics ranging from general model extensions to applications in fields as diverse as cognition, personality, organizational and sports psychology, and health sciences and education. The Rasch model is designed for categorical data, often collected as examinees' responses to multiple tasks such as cognitive items from psychological tests or from educational assessments. The Rasch model's elegant mathematical form is suitable for extensions that allow for greater flexibility in handling complex samples of examinees and collections of tasks from different domains. In these extensions, the Rasch model is enhanced by additional structural elements that either account for differences between diverse populations or for differences among observed variables. Research on extending well-known statistical tools like regression, mixture distribution, and hierarchical linear models has led to the adoption of Rasch model features to handle categorical observed variables. We maintain both perspectives in the volume and show how these merged models—Rasch models with a more complex item or population structure—are derived either from the Rasch model or from a structural model, how they are estimated, and where they are applied. Matthias von Davier is a Senior Research Scientist in the Research & Development Division at Educational Testing Service. He is the author of WINMIRA, a software package for estimating latent class models, mixture distribution Rasch models, and hybrid Rasch models. The software grew out of his work with colleagues at the Methodology Department of the Institute for Science Education (IPN) in Kiel, Germany. Von Davier's current research is concerned with extensions of Rasch models and more general Item Response Theory (IRT) models to multidimensional, diagnostic models and with mixture distribution models, with statistical computation and estimation, and with applications of psychometric models in national and international educational assessments. Claus H. Carstensen is a junior professor in the Psychometrics and Methodology Department at the IPN. Carstensen's work is concerned with multidimensional extensions of the Rasch model and applications of these models in intelligence and expertise research and educational assessments. He and Juergen Rost, head of the IPN's Methodology Department at the time, developed MULTIRA, a software package for multidimensional Rasch models. Before his current position, Carstensen was a Research Officer at the Australian Council for Educational Research, where his focus was large-scale data analysis using multidimensional extensions of the Rasch model.
Contents: Introduction: Extending the Rasch Model -- Introduction: Extending the Rasch Model -- Multivariate and Mixture Rasch Models -- Measurement Models as Narrative Structures -- Testing Generalized Rasch Models -- The Mixed-Coefficients Multinomial Logit Model: A Generalized Form of the Rasch Model -- Loglinear Multivariate and Mixture Rasch Models -- Mixture-Distribution and HYBRID Rasch Models -- Generalized Models—Specific Research Questions -- Application of the Saltus Model to Stagelike Data: Some Applications and Current Developments -- Determination of Diagnostic Cut-Points Using Stochastically Ordered Mixed Rasch Models -- A HYBRID Model for Test Speededness -- Multidimensional Three-Mode Rasch Models -- (Almost) Equivalence Between Conditional and Mixture Maximum Likelihood Estimates for Some Models of the Rasch Type -- Rasch Models for Longitudinal Data -- The Interaction Model -- Multilevel Rasch Models -- Applications of Multivariate and Mixed Rasch Models -- Mixed Rasch Models for Measurement in Cognitive Psychology -- Detecting Response Styles and Faking in Personality and Organizational Assessments by Mixed Rasch Models -- Application of Multivariate Rasch Models in International Large-Scale Educational Assessments -- Studying Development via Item Response Models: A Wide Range of Potential Uses -- A Comparison of the Rasch Model and Constrained Item Response Theory Models for Pertinent Psychological Test Data -- Latent-Response Rasch Models for Strategy Shifts in Problem-Solving Processes -- Validity and Objectivity in Health-Related Scales: Analysis by Graphical Loglinear Rasch Models -- Applications of Generalized Rasch Models in the Sport, Exercise, and the Motor Domains
Online: http://dx.doi.org/10.1007/978-0-387-49839-3
Link: https://biblioteca.cunef.edu/gestion/catalogo/index.php?lvl=notice_display&id=34498
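The "elegant mathematical form" the abstract mentions is the dichotomous Rasch model, in which the probability of a correct response depends only on the difference between person ability and item difficulty on a common logit scale. A minimal sketch (standard textbook form, with made-up parameter values):

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model:
    P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the success probability is exactly 0.5.
p_match = rasch_prob(0.0, 0.0)    # 0.5
p_easier = rasch_prob(1.0, 0.0)   # ~0.731: ability one logit above difficulty
```

The mixture and multivariate extensions collected in this volume replace the single theta with class-specific or vector-valued parameters, but each component keeps this same response function.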
ISBN : 978-0-387-49839-3
Idioma : Inglés (eng)
Palabras clave: Statistics Medical research Education Quality of life Psychology Methodology Psychological measurement Psychometrics for Social Science, Behavorial Education, Public Policy, and Law general Life Research Methods/Evaluation Clasificación: 51 Matemáticas Resumen: This volume covers extensions of the Rasch model, one of the most researched and applied models in educational research and social science. This collection contains 22 chapters by some of the most recognized international experts in the field. They cover topics ranging from general model extensions to applications in fields as diverse as cognition, personality, organizational and sports psychology, and health sciences and education. The Rasch model is designed for categorical data, often collected as examinees' responses to multiple tasks such as cognitive items from psychological tests or from educational assessments. The Rasch model's elegant mathematical form is suitable for extensions that allow for greater flexibility in handling complex samples of examinees and collections of tasks from different domains. In these extensions, the Rasch model is enhanced by additional structural elements that either account for differences between diverse populations or for differences among observed variables. Research on extending well-known statistical tools like regression, mixture distribution, and hierarchical linear models has led to the adoption of Rasch model features to handle categorical observed variables. We maintain both perspectives in the volume and show how these merged models—Rasch models with a more complex item or population structure—are derived either from the Rasch model or from a structural model, how they are estimated, and where they are applied. Matthias von Davier is a Senior Research Scientist in the Research & Development Division at Educational Testing Service. 
He is the author of WINMIRA, a software package for estimating latent class models, mixture distribution Rasch models, and hybrid Rasch models. The software grew out of his work with colleagues at the Methodology Department of the Institute for Science Education (IPN) in Kiel, Germany. Von Davier's current research is concerned with extensions of Rasch models and more general Item Response Theory (IRT) models to multidimensional, diagnostic models and with mixture distribution models, with statistical computation and estimation, and with applications of psychometric models in national and international educational assessments. Claus H. Carstensen is a junior Professor in the Psychometrics and Methodology Department at the IPN, Carstensen's work is concerned with multidimensional extensions of the Rasch model and applications of these models in intelligence and expertise research and educational assessments. He and Juergen Rost, head of the IPN's Methodology Department at the time, developed MULTIRA, a software package for multidimensional Rasch models. 
Before his current position, Carstensen was a Research Officer at the Australian Council of Educational Research where his focus was large-scale data analysis using multidimensional extensions of the Rasch model Nota de contenido: Introduction: Extending the Rasch Model -- Introduction: Extending the Rasch Model -- Multivariate and Mixture Rasch Models -- Measurement Models as Narrative Structures -- Testing Generalized Rasch Models -- The Mixed-Coefficients Multinomial Logit Model: A Generalized Form of the Rasch Model -- Loglinear Multivariate and Mixture Rasch Models -- Mixture-Distribution and HYBRID Rasch Models -- Generalized Models—Specific Research Questions -- Application of the Saltus Model to Stagelike Data: Some Applications and Current Developments -- Determination of Diagnostic Cut-Points Using Stochastically Ordered Mixed Rasch Models -- A HYBRID Model for Test Speededness -- Multidimensional Three-Mode Rasch Models -- (Almost) Equivalence Between Conditional and Mixture Maximum Likelihood Estimates for Some Models of the Rasch Type -- Rasch Models for Longitudinal Data -- The Interaction Model -- Multilevel Rasch Models -- Applications of Multivariate and Mixed Rasch Models -- Mixed Rasch Models for Measurement in Cognitive Psychology -- Detecting Response Styles and Faking in Personality and Organizational Assessments by Mixed Rasch Models -- Application of Multivariate Rasch Models in International Large-Scale Educational Assessments -- Studying Development via Item Response Models: A Wide Range of Potential Uses -- A Comparison of the Rasch Model and Constrained Item Response Theory Models for Pertinent Psychological Test Data -- Latent-Response Rasch Models for Strategy Shifts in Problem-Solving Processes -- Validity and Objectivity in Health-Related Scales: Analysis by Graphical Loglinear Rasch Models -- Applications of Generalized Rasch Models in the Sport, Exercise, and the Motor Domains En línea: http://dx.doi.org/10.1007/978-0-387-49839-3 
Link: https://biblioteca.cunef.edu/gestion/catalogo/index.php?lvl=notice_display&id=34498
Copies
Call number  Medium  Location  Sub-location  Section  Status  no copies

Statistical Models for Test Equating, Scaling, and Linking / SpringerLink (Online service) ; Alina A. von Davier (2011)
Title: Statistical Models for Test Equating, Scaling, and Linking
Document type: electronic document
Authors: SpringerLink (Online service) ; Alina A. von Davier
Publisher: New York, NY : Springer New York
Other publisher: Imprint: Springer
Publication date: 2011
Series: Statistics for Social and Behavioral Sciences, ISSN 2199-7357
Pages: XX, 368 p
Illustrations: online resource
ISBN/ISSN/DL: 978-0-387-98138-3
Language: English (eng)
Keywords: Education ; Assessment ; Statistics ; Psychometrics ; Assessment, Testing and Evaluation ; Statistics for Social Science, Behavioral Science, Education, Public Policy, and Law
Classification: 51 Mathematics
Summary: The goal of this book is to emphasize the formal statistical features of the practice of equating, linking, and scaling. It examines the quality of equating results from a statistical perspective (new models, robustness, fit, hypothesis testing, statistical monitoring), as opposed to focusing on policy and its implications, which, although very important, represent a different side of equating practice. The book contributes to establishing "equating" as a theoretical field, a view that has rarely been offered before. The tradition in equating practice has been to present the required knowledge and skills as a craft, implying that they could be acquired only through years of experience under the guidance of a knowledgeable practitioner. This book challenges that view by showing how a good equating framework, a sound understanding of the assumptions underlying the psychometric models, and the use of statistical tests and statistical process control tools can help the practitioner navigate the difficult decisions involved in choosing the final equating function.
This book provides a valuable reference for several groups: (a) statisticians and psychometricians interested in the theory behind equating methods, in the use of model-based statistical methods for data smoothing, and in the evaluation of equating results in applied work; (b) practitioners who need to equate tests, including those with these responsibilities in testing companies, state testing agencies, and school districts; and (c) instructors in psychometrics, measurement, and psychology programs. Dr. Alina A. von Davier is a Strategic Advisor and a Director of Special Projects in Research and Development at Educational Testing Service (ETS). During her tenure at ETS, she has led an ETS Research Initiative called "Equating and Applied Psychometrics" and has directed the Global Psychometric Services Center. The center supports the psychometric work for all ETS international programs, including TOEFL iBT and TOEIC. She is a co-author of a book on the kernel method of test equating, the author of a book on hypothesis testing in regression models, and a guest co-editor of a special issue on population invariance of linking functions for the journal Applied Psychological Measurement.
Contents: Overview -- A Statistical Perspective on Equating Test Scores (Alina A. von Davier) -- Part I: Research Questions and Data Collection Designs -- Equating Test Scores: Toward Best Practices (Neil J. Dorans, Tim P. Moses, and Daniel R. Eignor) -- Scoring and Scaling Educational Tests (Michael J. Kolen, Ye Tong, and Robert L. Brennan) -- Statistical Models for Vertical Linking (James E. Carlson) -- An Empirical Example of Change Analysis by Linking Longitudinal Item Response Data From Multiple Tests (John J. McArdle and Kevin J. Grimm) -- How to Average Equating Functions, If You Must (Paul W. Holland and William E. Strawderman) -- New Approaches to Equating With Small Samples (Samuel A. Livingston and Sooyeon Kim) -- Part II: Measurement and Equating Models -- Using Exponential Families for Equating (Shelby J. Haberman) -- An Alternative Continuization Method: The Continuized Log-Linear Method (Tianyou Wang) -- Equating Through Alternative Kernels (Yi-Hsuan Lee and Alina A. von Davier) -- A Bayesian Nonparametric Model for Test Equating (George Karabatsos and Stephen G. Walker) -- Generalized Equating Functions for NEAT Designs (Haiwen H. Chen, Samuel A. Livingston, and Paul W. Holland) -- Local Observed-Score Equating (Wim J. van der Linden) -- A General Model for IRT Scale Linking and Scale Transformations (Matthias von Davier and Alina A. von Davier) -- Linking With Nonparametric IRT Models (Xueli Xu, Jeff A. Douglas, and Young-Sun Lee) -- Part III: Evaluation -- Applications of Asymptotic Expansion in Item Response Theory Linking (Haruhiko Ogasawara) -- Evaluating the Missing Data Assumptions of the Chain and Poststratification Equating Methods (Sandip Sinharay, Paul W. Holland, and Alina A. von Davier) -- Robustness of IRT Observed-Score Equating (C. A. W. Glas and Anton A. Beguin) -- Hypothesis Testing of Equating Differences in the Kernel Equating Framework (Frank Rijmen, Yanxuan Qu, and Alina A. von Davier) -- Applying Time-Series Analysis to Detect Scale Drift (Deping Li, Shuhong Li, and Alina A. von Davier)
Online: http://dx.doi.org/10.1007/978-0-387-98138-3
Link: https://biblioteca.cunef.edu/gestion/catalogo/index.php?lvl=notice_display&id=33085
Copies
Call number  Medium  Location  Sub-location  Section  Status  no copies

Elements of Adaptive Testing / SpringerLink (Online service) ; Wim J. van der Linden ; Cees A. W. Glas (2010)