Melican et al., 1989 - Google Patents
Accuracy of item performance predictions based on the Nedelsky standard setting method (Melican et al., 1989)
- Document ID: 7504989578415800815
- Authors: Melican G; Mills C; Plake B
- Publication year: 1989
- Publication venue: Educational and Psychological Measurement
Snippet
The Nedelsky standard setting procedure utilizes an option elimination strategy to estimate the probability that a minimally competent candidate (MCC) will answer a multiple-choice item correctly. The purpose of this study was to investigate the accuracy of predicted item …
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N33/00—Investigating or analysing materials by specific methods not covered by the preceding groups
- G01N33/48—Investigating or analysing materials by specific methods not covered by the preceding groups biological material, e.g. blood, urine; Haemocytometers
- G01N33/50—Chemical analysis of biological material, e.g. blood, urine; Testing involving biospecific ligand binding methods; Immunological testing
- G01N33/5005—Chemical analysis of biological material, e.g. blood, urine; Testing involving biospecific ligand binding methods; Immunological testing involving human or animal cells
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B7/00—Electrically-operated teaching apparatus or devices working with questions and answers
- G09B7/06—Electrically-operated teaching apparatus or devices working with questions and answers of the multiple-choice answer-type, i.e. where a given question is provided with a series of answers and a choice has to be made from the answers
- G09B7/07—Electrically-operated teaching apparatus or devices working with questions and answers of the multiple-choice answer-type, i.e. where a given question is provided with a series of answers and a choice has to be made from the answers providing for individual presentation of questions to a plurality of student stations
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B7/00—Electrically-operated teaching apparatus or devices working with questions and answers
- G09B7/02—Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
Similar Documents
Publication | Title
---|---
Medley et al. | The accuracy of principals’ judgments of teacher performance
Cashel et al. | The Personality Assessment Inventory (PAI) and the detection of defensiveness
Wise et al. | Response time effort: A new measure of examinee motivation in computer-based tests
Linn et al. | Item bias in a test of reading comprehension
Rovinelli et al. | On the use of content specialists in the assessment of criterion-referenced test item validity
Pannone | Predicting test performance: A content valid approach to screening applicants
Engelhard Jr et al. | Accuracy of bias review judges in identifying differential item functioning on teacher certification tests
Norcini Jr | Standards and reliability in evaluation: when rules of thumb don't apply
Mboya | The relative importance of global self-concept and self-concept of academic ability in predicting academic achievement
Nicholson et al. | Utility of MMPI–2 indicators of response distortion: Receiver operating characteristic analysis
Rudner | Individual assessment accuracy
Holmes | Unidimensionality and vertical equating with the Rasch model
Melican et al. | Accuracy of item performance predictions based on the Nedelsky standard setting method
Drasgow et al. | A decision-theoretic approach to the use of appropriateness measurement for detecting invalid test and scale scores
Prohaska | “I know I'll get an A”: Confident overestimation of final course grades
Gillespie | Placement testing in community colleges: A response to Hughes and Nelson
Kupermintz | On the reliability of categorically scored examinations
Huberty | Relationship of the WISC-R factors to the Adaptive Behavior Scale-School Edition in a referral sample
Buckley‐Sharp et al. | Methods of analysis of multiple‐choice examinations and questions
Allen et al. | The effect of deleting content-related items on IRT ability estimates
Singh et al. | Response-bias-free recognition tests to measure advertising effects
Livingston | A Utility-Based Approach to the Evaluation of Pass/Fail Testing Decision Procedures. COPA-75-01
Al-zboon et al. | The Effect of the Percentage of Missing Data on Estimating the Standard Error of the Items' Parameters and the Test Information Function According to the Three-Parameter Logistic Model in the Item Response Theory
Wolfle et al. | Within-variable, between occasion error covariances in models of educational achievement
Plake et al. | Effects of item context on intrajudge consistency of expert judgments via the Nedelsky standard setting method