
The application and empirical comparison of item parameters of Classical Test Theory and Partial Credit Model of Rasch in performance assessments


dc.contributor.advisor Olaomi, John
dc.contributor.author Mokilane, Paul Moloantoa
dc.date.accessioned 2015-03-11T09:57:50Z
dc.date.available 2015-03-11T09:57:50Z
dc.date.issued 2014-05
dc.identifier.citation Mokilane, Paul Moloantoa (2014) The application and empirical comparison of item parameters of Classical Test Theory and Partial Credit Model of Rasch in performance assessments, University of South Africa, Pretoria, <http://hdl.handle.net/10500/18362> en
dc.identifier.uri http://hdl.handle.net/10500/18362
dc.description.abstract This study empirically compares Classical Test Theory (CTT) and the Partial Credit Model (PCM) of Rasch, focusing on the invariance of item parameters. The invariance concept, which is a consequence of the principle of specific objectivity, was tested in both CTT and PCM using the results of learners who wrote the National Senior Certificate (NSC) Mathematics examinations in 2010. The difficulty levels of the test items were estimated from independent samples of learners, and the same samples used to calibrate the item difficulty levels under the PCM were also used to calibrate them under CTT. The difficulty levels were estimated using RUMM2030 for the PCM and SAS for CTT; both are statistical software packages. Analysis of variance (ANOVA) was used to compare the four design groups of test takers, and where the ANOVA showed a significant difference between the group means, Tukey's groupings were used to establish where the difference came from. The findings were that item difficulty parameter estimates based on the CTT framework were not invariant across the independent sample groups; overall, the CTT framework was unable to produce invariant item difficulty parameter estimates. The PCM estimates were very stable in the sense that, for most items, there was no significant difference between the means of at least three design groups, and the group that deviated from the rest did not deviate by much. The item parameters of the group representative of the population (proportional allocation) and the group in which the same number of learners (50 learners) was taken from each performance category did not differ significantly for any item except item 6.6 in examination question paper 2. It appears that, for item parameters to be invariant of the group of test takers under the PCM, the group of test takers must be heterogeneous and each performance category must be large enough for proper calibration of the item parameters. Under CTT, the highest item parameter estimates were consistently found in the sample dominated by highly proficient learners in Mathematics ("bad") and the lowest values were consistently found in the design group dominated by less proficient learners; this phenomenon was not apparent in the Rasch model. en
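
dc.description.note For context, the Partial Credit Model referred to in the abstract is conventionally written (Masters, 1982) as the probability that learner n obtains score x on polytomous item i with maximum mark m_i, where theta_n is the learner's ability and delta_ik are the item's step (threshold) parameters. The formula below is the standard form of the model, not material reproduced from the dissertation itself:

    P(X_{ni} = x) = \frac{\exp\!\left(\sum_{k=0}^{x} (\theta_n - \delta_{ik})\right)}
                         {\sum_{j=0}^{m_i} \exp\!\left(\sum_{k=0}^{j} (\theta_n - \delta_{ik})\right)},
    \qquad x = 0, 1, \dots, m_i,
    \quad \text{with the convention } \sum_{k=0}^{0} (\theta_n - \delta_{ik}) \equiv 0.

The CTT side of the comparison can be illustrated with a minimal sketch: the difficulty of a polytomous item taken as the mean score expressed as a proportion of the maximum mark, compared across design groups with a one-way ANOVA followed by Tukey's HSD where the ANOVA is significant. The Python code below uses simulated scores and made-up group names purely for illustration; the study itself used SAS for the CTT calibration and RUMM2030 for the PCM.

    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(0)
    max_score = 4  # assumed maximum mark for the illustrative item

    # Four hypothetical design groups of test takers (names are illustrative, not from the study);
    # real data would be the marked scores of the sampled NSC learners on one item.
    groups = {
        name: rng.integers(0, max_score + 1, size=200)
        for name in ["proportional", "equal_50", "high_dominated", "low_dominated"]
    }

    # CTT difficulty for a polytomous item: mean score as a proportion of the maximum mark
    difficulty = {name: scores.mean() / max_score for name, scores in groups.items()}
    print("CTT difficulty per design group:", difficulty)

    # One-way ANOVA across the design groups on the item-level scores
    f_stat, p_val = f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.3f}, p = {p_val:.4f}")

    # Tukey's HSD to locate which group means differ, if the ANOVA is significant
    scores = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(scores, labels))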
dc.format.extent 1 online resource ([viii], 110 leaves)
dc.language.iso en en
dc.subject CTT en
dc.subject IRT en
dc.subject NSC en
dc.subject Item en
dc.subject Rasch model en
dc.subject Partial Credit Model en
dc.subject Invariance en
dc.subject Specific objectivity en
dc.subject.ddc 519.50968
dc.subject.lcsh Educational tests and measurements -- South Africa
dc.subject.lcsh Rasch models -- South Africa
dc.subject.lcsh Item response theory
dc.subject.lcsh Psychometrics
dc.subject.lcsh Mathematical ability -- South Africa -- Testing
dc.subject.lcsh Education, Secondary -- South Africa -- Mathematical models
dc.title The application and empirical comparison of item parameters of Classical Test Theory and Partial Credit Model of Rasch in performance assessments en
dc.type Dissertation en
dc.description.department Mathematical Sciences
dc.description.degree M.Sc. (Statistics)

