Our math department offers a class that our principal thinks is unnecessary. I truly don't understand his logic, which seems (to me) to be that students' doing well in this class is evidence that the class is unnecessary. If I'm misstating his position, it's only because he hasn't communicated it in a way that I can understand. To be clear, I think he's doing a fine job as principal and in general I support what he does, but on this one topic I just do not understand where he's coming from.
So I'm going to do a little statistical analysis. The course is a "prep for the follow-on course", so I've asked the district for the following data:
- Students who took both this course and the follow-on course at our school in the last three years
- Grades for those students in the follow-on course
- Students who took only the follow-on course at our school in the last three years
- Grades for those students
My goal is to see if the prep course students do at least as well in the follow-on course as do students who did not need the extra preparation. It seems to me a chi-square test would be appropriate, looking at the proportions of students who "succeeded" in the follow-on course (grade of A or B) and whether or not they took the "prep" course. The true measure would be to compare the grades students would have gotten in the follow-on course had they not taken the prep course, but obviously that information is unavailable to us mere mortals. Any thoughts on the validity of the analysis as described above?
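Just to make the idea concrete, here's roughly what that test might look like in Python once I have the data. The counts below are made-up placeholders, not anything from the district:

```python
# Rough sketch of the chi-square test described above.
# Rows: took the prep course vs. didn't; columns: "succeeded" in the
# follow-on course (A or B) vs. didn't. These counts are placeholders,
# not real district data.
from scipy.stats import chi2_contingency

observed = [
    [40, 25],   # prep course:    succeeded, didn't succeed (placeholder)
    [55, 30],   # no prep course: succeeded, didn't succeed (placeholder)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.3f}, dof = {dof}")
```

A small p-value would suggest the two groups' success rates really do differ; it wouldn't say why, which is where the self-selection issue raised in the comments comes in.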
There's nothing wrong with doing that, but then I thought about the following: what if we were to do an analysis of standardized test results, by teacher, for different courses? Oh, I wouldn't need to know the teachers' names; the data could be given to me as Teacher A, Teacher B, etc. Would an ANCOVA analysis (using last year's test score as a covariate) be mathematically appropriate? And would it be "appropriate", for lack of a better term, to compare, say, all the geometry teachers?
I was thinking about this because we have some teachers who are all into Common Core, into group work, into all that stuff that I'm not; I'm a direct instruction kinda guy. Currently, though, my students' results can't be compared to anyone else's, as I'm our school's only statistics teacher and the other class I teach doesn't have a specific standardized test associated with it. We do, however, have several geometry teachers, and they use varying methods to teach the subject matter. My thought was that we could use standardized test scores as a proxy for the teaching methods and analyze to see which results are better. We have lots of geometry classes.
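For the sake of concreteness, an ANCOVA along those lines might look something like this in Python with statsmodels. The file and column names here are hypothetical stand-ins for whatever the district would actually provide:

```python
# Hedged sketch of the ANCOVA idea: this year's standardized test score
# modeled as a function of (anonymized) teacher, with last year's score
# as the covariate. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("geometry_scores.csv")  # one row per student (hypothetical)

model = smf.ols("post_score ~ C(teacher) + prior_score", data=df).fit()
print(model.summary())

# Type II ANOVA table: does "teacher" explain variation in this year's
# scores beyond what last year's score already accounts for?
print(sm.stats.anova_lm(model, typ=2))
```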
If I could see some evidence that so-called discovery learning and group work, which the district is pushing us to implement, are valid, I'd give them a shot, but until then I'm going to stick with direct instruction.
In any other field, a valid statistical test would be considered not only reasonable but obvious; in the unionized, we-are-all-one world of education, though, even the mere suggestion that some teachers (or their methods) are not as effective as others could ignite a firestorm.
What do you think?
9 comments:
I think your idea of statistical analysis would be the best indicator. But as you also mention, anything that might paint teachers in a negative light is never embraced by the union. I teach science, and I also like direct instruction for most topics. They can't just do a lab and "figure it out" every time. Now, might I use a lab to introduce a topic, and cause them to make observations, so that when I DO give the direct instruction they have a reference point? That sounds good! My problem is I have such large classes (similar to your 33/36 class caps), I don't have the lab space or materials to allow them to work in pairs. They have to work in groups of 4 or 5, and there are always students who sit and do nothing and then just copy their partners' data. But I digress...
I am hoping that the State does away with CSTs next year so I can try some of the "inquiry-based instruction" stuff and not be concerned with the scores dropping precipitously. I would love to get back to teaching a true College Prep course.
You need to control for self-selection of the students that do/don't take the "prep for the follow-on course."
Ideally, you'd be able to assign students at random to the prep course or not, but you can't do this so you need to account for it.
To illustrate: The weaker students might realize that they *ARE* weaker and take the prep course. Then they might score the same as the kids who didn't take the prep course. So the prep course is useless, right? Well, no ... those kids would have scored EVEN WORSE without the prep course.
Alternately, the stronger students might take the prep course to be VERY SURE that they do well in the real course. When they then DO perform well, you can't be sure that the prep course is/was responsible. Those students probably would have done well anyway.
So ... you need to account for this. One way *might* be to look at the grades of the kids in the class *BEFORE* the prep course.
So ... as an example. Assume that these are math courses and the sequence is something like: (a) Trigonometry, (b) Pre-Calculus [the prep course] and (c) Calculus.
You want to pair up the kids who have As in Trig who did/didn't take Pre-Calc. Same thing for the kids who had Bs in Trig.
This *still* isn't perfect, but it is better than the simpler analysis you suggested :-)
-Mark Roulo
The prep course we have isn't on CSU/UC's list of A-G approved classes, so students who do well will generally avoid this class like the plague; that means there's less of an "I'll take this class to ensure I do well" problem than you might think.
Looks like you're suggesting a matched-pair or blocking scheme; I'm not sure how I'd do that in this case, but it's certainly worth considering.
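To make Mark's suggestion concrete for myself, a stratified (blocked) comparison might look something like this: within each prior-course grade, compare follow-on success rates for students who did and didn't take the prep course. Again, the file and column names are just placeholders:

```python
# Sketch of the stratified comparison Mark describes. Column names are
# hypothetical; "succeeded" means an A or B in the follow-on course.
import pandas as pd

df = pd.read_csv("course_history.csv")  # one row per student (hypothetical)

for prior_grade, stratum in df.groupby("prior_grade"):
    summary = stratum.groupby("took_prep")["succeeded"].agg(["mean", "count"])
    print(f"\nPrior grade {prior_grade}:")
    print(summary.rename(columns={"mean": "success_rate", "count": "n"}))
```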
But ... you can't get an unbiased sample ... especially at the school where you teach. Students are in no way randomized, and parents would throw fits.
Any way to administer a set of three tests: one to the students taking the prep class during its first session; one to all of the students taking the stats class at the beginning of that course, to measure preparedness; and one upon exiting the stats class?
BTW, I have tutored students who hated the self-discovery and group work approach. From my observation, the ones who "get" the work carry the load and resent it, and the ones who don't end up feeling worse because they know they are not contributing. Or, they don't give a hoot to begin with, and letting someone else shoulder the work suits them just fine - but they don't learn anything. I think that most students DO want to perform well, and if a student is struggling it is more respectful to him to let him work as an individual (and maybe provide help after class), rather than putting him in a situation where his deficits are on display to his classmates. Just my two cents.
Parents don't need to know what I'm doing, and students are randomly assigned to classes. It's not perfect, but in the real world we work with what we've got. The results merely have somewhat more imprecision, which would be taken into account.
"Looks like you're suggesting a matched pair or blocking scheme..."
(trots off to Wikipedia ... reads for a while ...)
Yes!
(I have no formal statistics background, so the terminology is often a mystery)
In fact, if you *don't* do something like this (or account for the differences in the two student populations some other way) I'm not sure how valid your results can be. There is probably *something* different about the kids who choose to take the prep class (they know they are weak in this area, they *care* more, they are all football players and this fits their schedule best, ...).
One other possibility (which requires cooperation from other teachers, so is probably a no-go) would be to have one or two sub-topics from the non-prep course *not* be taught in the prep course. If you could get a sense for how the kids in the prep course did on the taught vs. non-taught sub-topics, that should also help to control for self-selection.
Another possibility would be if you could find kids who *wanted* to take the prep course, but didn't/couldn't for some reason. Then you could track the kids in the prep course vs these kids.
This sounds like fun if you can pull it off!
-Mark Roulo
Darren ... students are allegedly and MOSTLY randomly assigned to classes. But I remember days when I would be over my limit, whereas a certain other teacher with some of the same classes as me would be 40 students under the limit. That CANNOT be random. And parents with pull can select teachers ... and teachers with pull can select students. Happens all the time. I missed the part about group/discovery learning -- that's hilarious, as when I was hired, that was what I got to teach (remember CPM?) and the district scrapped it because it was completely ineffective and consistently produced lower results...