ACT Research & Policy | Technical Brief | May 2020
Over the last several years, ACT has been conducting research to examine the validity and
fairness of different scoring practices and options to help provide insights on how postsecondary
institutions might best make use of multiple ACT scores when students retest. Results from
the studies conducted to date support offering the new options of section retesting and
superscoring. First, in a large multi-institutional study (Mattern, Radunzel, Bertling, & Ho, 2018),
superscoring was found to be as predictive of first-year GPA as the other ACT Composite
scoring methods examined, if not more so; the other methods included computing the average
Composite score across test administrations or using students’ most recent Composite score
or their highest Composite score. Correlations ranged from 0.39 for the average to 0.41 for
superscoring. The study also found that first-year GPA for students who tested more often was
underpredicted, but that when examining the prediction accuracy by the number of times tested,
superscoring resulted in the least amount of prediction error across the four scoring methods.
These results suggest that ACT subject scores do not have to come from a single test attempt
to be a valid indicator of students’ college readiness, supporting both superscoring and section
retesting.
The Mattern et al. (2018) study also explored the diversity implications for an admitted class of
using superscores as compared to the other three scoring methods to admit students. Despite
the fact that underserved students are less likely to retest (Harmston & Crouse, 2016), the
authors found that superscoring did not result in a less diverse admitted class as compared
to the other three scoring methods. In a subsequent study (Mattern & Radunzel, 2019), the
researchers found that superscoring did not exacerbate subgroup differences for the national
ACT-tested population over those reported based on students’ most recent ACT scores.
Second, results from a randomized study of 4,000 students conducted in 2016 indicated that
the order in which the subject tests were administered did not impact student performance
(Andrews, 2019). More specifically, the study found that students earned subject scores that
were similar regardless of the order in which the subject tests were taken. Given that ACT
scores were similar when a subject test was taken first as compared to in its standard position
in the full ACT test, the findings from this study support the option of offering section retesting,
where students will not have to retake the entire ACT test but can focus their learning efforts
on specific subject areas of their choice. Despite concerns that section retesting may lead to
artificially inflated scores, two recent studies (Mattern, Radunzel, & Andrews, 2019; Radunzel &
Mattern, 2020) provide empirical evidence suggesting that this is not the case. In particular,
the results from these two studies demonstrate that students’ performance when retesting in a
single ACT subject area tends to be consistent with what would be expected based on typical
test-retest score gains from taking the entire ACT test.
While decades of research provide evidence that each individual ACT test is a valid and
reliable measure of students’ college readiness and related to college outcomes (ACT, 2019;
see chapters 10 and 11), there is a need to examine the predictive validity of section retest
scores. Moreover, given that the prior study on superscoring (Mattern et al., 2018) was based
only on full administrations of the ACT, it is of interest to investigate the relationship between
first-year college outcomes and ACT Superscores that combine scores not only across full
test administrations but also across section retests. To address these topics, we conducted
a concurrent validity study in collaboration with a single four-year public university involving
students from its fall 2019 freshman cohort. In particular, the following two research questions
were examined in this case study: