Absence and Growth: An Inconclusive, Pseudo-Scientific Analysis

March 2, 2011

When I turned 18 and headed off to college, I tried to declare my de facto freedom from parental control, but Dad was paying my tuition. He still ruled, and one absolute rule, in his mind at least, was this: DO NOT SKIP CLASS.  No excuses.

He knew what all teachers and most parents know—you can’t learn if you aren’t present. This belief—this assumption—is what led me to begin analyzing data about absence and EOC growth. The question: to what extent does student absence affect the likelihood that a student will achieve growth on an end-of-course (EOC) test?

Methods
Using NCWise attendance data for each EOC course—data that does not include absences due to ISS or Choice—I added to our EOC growth spreadsheet a column listing each student’s number of absences beside a column indicating (Y or N) whether the student made growth on the exam for that course. Students without predictor or exam scores were filtered out.

This information was used to determine the average number of absences for students who made growth and for those who did not make growth.

Then I grouped students according to degree of absence: minimal (0-2), some (3-5), several (6-8), and many (9+). This data was then analyzed to show the likelihood that students in each absence category demonstrated growth.
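
For anyone curious what those steps look like outside a spreadsheet, here is a rough sketch in Python with pandas. The file name and column names (absences, made_growth, predictor_score, exam_score) are invented for illustration; the actual work was done by hand with NCWise exports and our EOC growth spreadsheet.

```python
import pandas as pd

# Hypothetical export of the EOC growth spreadsheet with the absence column added.
df = pd.read_csv("eoc_growth_with_absences.csv")

# Filter out students without predictor or exam scores.
df = df.dropna(subset=["predictor_score", "exam_score"])

# Average absences for students who did (Y) and did not (N) make growth.
print(df.groupby("made_growth")["absences"].mean())

# Group students by degree of absence and find the share of each group
# that demonstrated growth.
bins = [-0.5, 2, 5, 8, float("inf")]
labels = ["minimal (0-2)", "some (3-5)", "several (6-8)", "many (9+)"]
df["absence_group"] = pd.cut(df["absences"], bins=bins, labels=labels)

growth_rate = (
    df["made_growth"].eq("Y")
    .groupby(df["absence_group"])
    .mean()
    .mul(100)
    .round(1)
)
print(growth_rate)  # percent of each absence group showing growth
```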

These methods, I assure you, lack perfect mathematical and scientific precision. I did not account for outliers, such as those students who missed a ridiculous number of days. My goal was to use data to form an impression, which might be why I made a C in Algebra II and Calculus.

Results
Students who made growth averaged 3.8 absences, while those who did not make growth averaged 4.6 absences.

Over 48% of students with minimal absences (0-2) demonstrated growth.
Nearly 40% of students with some absences (3-5) demonstrated growth.
Approximately 37% of students with several absences (6-8) demonstrated growth.
Approximately 34% of students with many absences (9+) demonstrated growth.

Inconclusive Conclusions
I have to admit that when I started compiling this data, I was hoping to share it with Mr. Medlin, so that he could stress to freshmen and to students with frequent absences just how absolutely and perfectly connected attendance and achievement were. I’m not sure that’s what the results show us.

Students who showed growth did average fewer absences, but the difference was less than a single day, not enough to make a significant impression.

The most significant statistic for me was that students who almost always attended class (0-2 absences) had an eight-percentage-point advantage over every other group and were almost 50/50 for showing growth.

Nearly as telling is that only six percentage points separate the success rate of students with some absences from that of students with many absences. That “many” group includes kids with over 30 absences, some of whom actually made growth.

I hate—I mean I really hate—to admit it, but it just doesn’t appear that attendance was the deciding factor I imagined it would be. So what is?

The fact is, I don’t know. The teacher seems to play a much greater role in student success than absences do. Comparing growth percentages within each tested standard and honors course, the gap from the highest-performing teacher to the lowest ranges from 13% to 45%. Those differences result from a wide range of factors, including teacher experience within a particular course. The fact remains—teachers make a difference in student learning. That’s good (and obvious) news. It means that what we do—how we teach, our instructional methods and management strategies—makes a difference. It means that we can get better at teaching, even if problems beyond our control get worse. It means that we are not doomed—impacted, yes, but not doomed—by challenges like poor attendance.
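
For what it’s worth, the teacher comparison was the same kind of crude arithmetic: for each tested course, compute each teacher’s growth percentage and subtract the lowest from the highest. Here is a sketch, again with invented column names (course, teacher) and the same hypothetical spreadsheet export as above:

```python
import pandas as pd

# Same hypothetical export, now assuming it also carries "course" and "teacher"
# columns (invented names).
df = pd.read_csv("eoc_growth_with_absences.csv")
df = df.dropna(subset=["predictor_score", "exam_score"])

# Growth percentage for each teacher within each course.
grew = df["made_growth"].eq("Y")
by_teacher = grew.groupby([df["course"], df["teacher"]]).mean().mul(100)

# Gap between the highest- and lowest-performing teacher in each course.
gap = by_teacher.groupby(level="course").agg(lambda s: s.max() - s.min())
print(gap.round(1))  # percentage-point spread per course
```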

I tend to agree with my father’s rule—attend class no matter what. For those students who don’t, all we can do is teach, teach, teach.

I’d love to hear your comments.