Well, darn! There went another highly promising opportunity to feel all virtuous and smug. I spent last weekend at a conference on evaluating Local Systemic Change grants from the NSF. Given that evaluation even at classroom level makes me uneasy, and at the level of an NSF project downright panicky, attending the conference had a lot to do with pressure from my conscience to write something on the nearly pristine mental page reserved for such information. My image was of myself sitting quietly in a corner scrabbling for some comprehension, which I would spend the years of our grant desperately trying to make use of. Instead, I found myself totally involved in an absolutely fascinating effort by a lot of interesting people to carefully design an assessment tool which, as everyone is highly conscious, is going to have an impact on the shape of the whole collection of projects.
Let's see how much I can convey. Our project is in the third cohort of Local Systemic Change grants, but the first to concentrate primarily on Mathematics--all of the first and most of the second focused on Science education (Seattle has a Science project in each of those cohorts.) The idea of the grants is exactly described in the title: to bring about change in education not simply in a collection of classrooms, but in the entire system of education in the area covered. Not just random change, but change in the direction of increased student engagement and deepened conceptual grasp. Clearly desirable--maybe essential is the better word--but how under the sun do you create a system for measuring it at all, much less measuring it in a uniform way across thirty-odd projects? We watched snippets of videotaped classes and scored them independently, then discussed our scoring (which, not too surprisingly, resulted in an almost invariable convergence to the mean). Along the way we discovered that while math and science start and end in the same place, they diverge considerably in the middle--in case any added complications are needed.
In the end, we changed a few bits of wording, and possibly the subdivisions of the Summary Evaluation. Actually, that conversation was one of the most interesting. The bottom two categories had been "passive learning" and "activity for activity's sake," and we decided that they both belonged together in a single lowest slot. At that point Spud Bradley from the NSF pointed out that the issue is actually passive non-learning, i.e., students sitting like blobs while information pours past them. Otherwise there is no distinction from the state described as one of the options in the top categories, where the students are listening to the teacher and what the teacher is saying has them clearly engaged and thoughtful. I hope they do change that piece of terminology.
In short, I learned a lot, and I had a great time. One of the things I most emphatically learned is that I am profoundly grateful not to be Horizon Research, Inc. They're the ones who have been hired on to undertake the overall evaluation of the total collection of Local Systemic Change grants. What a job!