The final official words of the conference I just attended were
addressed by one of the organizers to the woman she had invited to run the
last session: "I knew you'd be good, but damn! I didn't know you'd be THAT
good!" I could say the same of the whole conference, myself. It was the
NSF's annual gathering of co-PI's and others heavily involved in the 72
Local Systemic Change projects all around the country. All of us spend a
great deal of time and energy working on professional development of K-12
mathematics and science teachers, which means in effect that this was a
professional development workshop for professional developers -- a pretty
scary thought. But the title, "What does the research tell us?" was
encouraging, and the pre-conference readings likewise, and sure enough,
the ratio of time spent in interesting and valuable ways to time spent in
screen-saver mode was remarkably high.
The conference started Thursday evening with a talk by Paul Black
entitled "Inside the Black Box: Raising Standards Through Classroom
Assessment." He led off with a joke so topical that I shall attempt to
reproduce it. The set-up is the currently much-recycled one of a
conversation between a person in a field (whom I shall call F) and a
person above the field in the basket of a hot air balloon (whom I shall call B):
B: "Could you tell me where I am?"
F: "You are 27 yards directly above the center of my field."
B: "You must be a researcher."
F: "I am. How did you know?"
B: "You have made a precise and presumably accurate statement
which answers my question, but is of no use whatever to me."
F: "You must be a policy-maker."
B: "I am. How did you know?"
F: "You don't know where you are, you don't know where you're
going, and you're trying to pin the blame on me!"
And with that, Black launched into a discussion of formative
assessment--that is, "all those activities undertaken by teachers and/or
their students which provide information to be used as feedback to modify
the teaching and learning activities in which they are engaged"-- based on
his survey of 680 research articles on the subject. He posed three
initial questions: 1) Is there evidence that improving formative
assessment raises standards of pupils' learning? 2) Is the practice of
teachers at present O.K.? 3) Is there evidence about how we can improve
formative assessment in the classroom? Then, lest the suspense prove
perilous to our health, he provided the answer sheet: 1) yes, 2) no,
3) yes, but..., the "but" having, predictably, to do with the fact that
it ain't easy. The examples he then gave provided a fascinating array of
forms and styles of assessment, especially student self-assessment, and
pretty convincing data about their effectiveness (from studies with solid
control groups, etc.) A particularly interesting aspect was that it is the
low achievers especially who benefit from ongoing formative assessment --
which makes sense when you think of the fact that they are probably the
ones least likely to be in touch with their own level of understanding.
Several of Black's comments would have slotted so neatly into our
Brown Bag discussion of a couple of weeks ago that I had a lovely, if
fleeting, image of getting him here for another one. Unfortunately even my
bounding optimism doesn't quite run to importing a hot shot from King's
College, London for a no-budget seminar!
Friday morning's session was less of a novelty for me, because it
was given by our own Lillian McDermott, of whose program I have been in
awe for quite a while. She provided, as always, highly telling examples to demonstrate two points:
1) people teaching physics generally make totally false
assumptions about the level of understanding that has actually been
achieved by students who can successfully carry out the required
calculations; and
2) even people who are aware of the existence of the gaps can't
necessarily predict where they are, because an expert cannot get inside
the mind of a novice (shades of Gail Burrill's comments at the AWM/MER
session in last week's newsletter).
That, of course, is exactly what the Physics Education Group
devotes itself to finding out and acting upon, and further examples showed
us how effective this particular form of research can be.
The afternoon session again found me slightly in the position of
being a member of a choir being preached to, but with the choir member's
pleasure in being able to watch the congregation. Our speaker was Deborah
Ball, who is a pre-eminent researcher in mathematics education, and a
specialist in the study of the development of children's mathematical
thinking. She showed us a tape of a third grade class tackling the
problem: "Joshua ate 16 peas on Monday and 32 peas on Tuesday. How many
more peas did he eat on Tuesday than on Monday?" The first kid got the
correct answer in a normal way, by counting up the number line from 17 to
32. That's the first page of the transcript. The remaining seven pages are
a discussion by the rest of the class of other ways of doing the problem,
and whether they are "the same", and whether in fact they give the same
answer. As always, I was stunned by the level of attention and focus of a
bunch of eight-year-olds engaged in a completely intellectual discourse.
But the major point here was to have us observe just how many and how
subtle were the mathematical ideas, correct or not, being produced by
these kids, and to reflect on the implications of that in terms of the
demands we are making when we ask elementary school teachers to teach that way.
Saturday morning's session is the one on which I can give the
least information. That's because I have an appallingly low threshold of
data overload, and several fuses blew a short way into the talk. The topic
was certainly interesting and highly pertinent: "Standards Based Reform: a
National Look", and I think a lot of the statistics will shortly be
available on the LSC web-page. But I can't report on them here--sorry!
Saturday afternoon, with everybody's energy draining away and
everyone a bit anxious about their plane flights, especially given an
incipient blizzard, is when one might expect things to get a tad marginal.
Instead we had the most stimulating and provocative, not to mention the
most entertaining, session of the entire lot. Katherine Merseth, of the
Harvard Graduate School of Education and a specialist in teaching by
the case method, constructed for us a case study involving, appallingly
convincingly, all of the hazards we face in trying to carry out our
mission: the teacher who is trying hard but has been pushed to a desperate
level of non-confidence, the supportive administrator who suddenly does a
volte-face, the opposition of an organized community group (she named them
"Mathematical Accuracy Now!"--right!), et quite a number of ceteras. She
gave us the case study to read in preparation, and then, with humor and
vigor and magnificently incisive questions, she stirred up a discussion
that just kept on getting livelier and unearthing more issues, and could
clearly have done so for hours more. Net result was a conference closing
that was greeted not with "whew!" but with "well, darn!"--and what better
can you say?