Computer science assessments may be overestimating student readiness
Even for computer science majors, introductory programming courses can prove challenging. The sizable percentage of undergraduates who receive D's and F's, or who withdraw before earning either, cannot move on to the more advanced coursework needed to earn a degree in the field. And research has indicated that even students earning C's in an intro class often struggle in higher-level computer science courses.
With its own first-year students not immune to those struggles, Nebraska's School of Computing asked Ryan Bockmon and Stephen Cooper to develop an assessment that could evaluate whether students were ready for Computer Science I or might instead benefit from a pre-intro class.
The Husker duo chose to combine two existing instruments, both of which had been shown to predict performance in computer science courses. Bockmon and Cooper then conducted a voluntary study in which students enrolling in Computer Science I could complete the new assessment at the start of the fall 2020 semester. Of the 459 enrolled students, 202 chose to participate.
To their surprise, the researchers found no meaningful link between performance on the assessment and performance in the introductory course. But the team did notice that their voluntary sample contained remarkably few students who went on to withdraw or receive D’s and F’s at the end of the semester. Curious, Bockmon and Cooper compared the final grades of the 202 students who took the assessment against the 257 who chose not to. (Students who withdrew were assigned a 0, the equivalent of an F, on a 4.0 scale.)
The results were striking:
- Students who took the assessment averaged a 3.1 (B) in Computer Science I, whereas students who did not averaged a 2.3 (C+).
- Students who did not take the assessment were 5.5 times more likely to withdraw from the course and 2.7 times more likely to fail it (a rough sketch of this comparison follows this list).
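For readers curious about the arithmetic behind those figures, here is a minimal sketch in Python. The individual grades and withdrawal counts below are hypothetical, chosen only to be consistent with the averages and the 5.5x ratio reported above; they are not the study's actual data.

```python
# Minimal sketch of the grade comparison described above -- not the
# authors' analysis code. All individual grades and counts below are
# hypothetical, chosen only to match the reported summary figures.

def mean_gpa(grades):
    """Average of final grades on a 4.0 scale (withdrawals coded as 0)."""
    return sum(grades) / len(grades)

# Hypothetical final grades for a handful of students in each group
# (the real study compared 202 participants with 257 non-participants).
participants = [4.0, 3.7, 3.3, 3.0, 2.7, 2.7, 2.3]      # took the assessment
non_participants = [4.0, 3.7, 3.0, 2.7, 1.7, 1.0, 0.0]  # did not; one withdrawal

print(f"Participants averaged     {mean_gpa(participants):.1f}")      # 3.1
print(f"Non-participants averaged {mean_gpa(non_participants):.1f}")  # 2.3

# A "5.5 times more likely to withdraw" figure is a ratio of the two
# groups' withdrawal rates. With hypothetical counts of 4 of 202 and
# 28 of 257:
rate_participants = 4 / 202
rate_non_participants = 28 / 257
print(f"Relative withdrawal rate: "
      f"{rate_non_participants / rate_participants:.1f}x")            # 5.5x
```

Note that coding withdrawals as 0 rather than excluding them matters here: because non-participants withdrew far more often, dropping those students from the averages would understate the gap between the two groups.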
Those findings, now published in Communications of the ACM, pointed to the presence of a participation bias, meaning that the participants likely differed from the non-participants in ways that affected their performance in Computer Science I. And because the sample of participants was not representative of the class as a whole, the otherwise-valid assessment failed to predict that performance.
Bockmon and Cooper suspect that participation bias is a problem for computer science departments around the country, many of which also offer voluntary pre-course assessments. Making those assessments mandatory, or increasing incentives for participation, could mitigate the problem—and ultimately better identify students who could use some early help.
Ryan Bockmon et al., "What's your placebo?," Communications of the ACM (2022). DOI: 10.1145/3528085