Using Factor Analysis Procedures to Validate Score Reporting Practice of Large Scale Examinations: Establishing the Baseline

Molefhe Mogapi *

Department of Educational Foundations, Faculty of Education, University of Botswana, Botswana.

*Author to whom correspondence should be addressed.


Abstract

Performance of candidates in large-scale examinations is often reported using a composite score that aggregates several components of a subject. The components reflect the fact that subjects are made up of different topics or modalities, and each modality is assessed by means of a subset of items. Each subset of items measures a candidate's knowledge of a specific domain. However, more often than not, the construct validity or psychometric independence of each domain has not been empirically established, even though the domain has intuitive meaning. Factor analysis can be used to verify that the score reporting practice, as indicated by the number of domains, is supported by the underlying factor structure. In this paper, Social Studies and Science final examination test scores were used as dependent variables to extract underlying dimensions. The covariance matrix for each of the two subjects was submitted to a principal component analysis with Varimax rotation to produce factor loadings. The results indicated a unidimensional factor structure for Social Studies and a three-component model for Science. The findings were used to evaluate the adopted score reporting structure for each of the two subjects.
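As a rough illustration of the procedure the abstract describes (principal component analysis of a covariance matrix followed by Varimax rotation), the sketch below runs on synthetic scores in place of the actual Social Studies and Science data. The synthetic data, the fixed number of retained components, and the `varimax` helper are all illustrative assumptions, not the paper's analysis.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a factor-loading matrix (Kaiser's criterion of
    maximizing the variance of squared loadings within each column)."""
    p, k = loadings.shape
    R = np.eye(k)                      # orthogonal rotation matrix
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        )
        R = u @ vt
        new_var = np.sum(s)
        if new_var - var < tol:        # stop when the criterion plateaus
            break
        var = new_var
    return loadings @ R

# Hypothetical data: 200 candidates x 6 domain (item-subset) scores
rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 6))

# Principal component analysis of the covariance matrix
cov = np.cov(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]      # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain a chosen number of components (3 here, purely for illustration;
# in practice this comes from the scree plot or eigenvalue-greater-than-one rule)
k = 3
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
rotated = varimax(loadings)
print(rotated.shape)                   # (6, 3): one row per domain score
```

Because Varimax is an orthogonal rotation, the communality of each variable (the row sums of squared loadings) is unchanged; only the distribution of loading across components shifts, which is what makes the rotated solution easier to match against the intended score-reporting domains.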

Keywords: Construct validity, score reporting practice, exploratory factor analysis, confirmatory factor analysis, scree plot.


How to Cite

Mogapi, Molefhe. 2019. “Using Factor Analysis Procedures to Validate Score Reporting Practice of Large Scale Examinations: Establishing the Baseline”. Journal of Education, Society and Behavioural Science 32 (4):1-10. https://doi.org/10.9734/jesbs/2019/v32i430184.