Assessment

(originally written June 24, 2010)

How do we know that we know what we know?

If you’re a Senegalese shopkeeper eager to prove the freshness of your bread, you grab a loaf in your unwashed hand and give it a squeeze. “See?” the non-crumbling, slow-rising crust proclaims. “Not stale!”

“Yep,” I nod, exchanging the coin in my hand for the bread in the shopkeeper’s. “So I’ll taste…”

It’s assessment, folks. It all boils down to assessment. In this case, the proof was in the pudding (or, more precisely, the yeast). But behavioral assessment, as we saw with the bread’s impressive acrobatics, is less commonly used than paper-and-pencil quizzes. Normally, we just ask people what they know. In fact, I had asked the shopkeeper what he knew — I inquired whether the bread was from yesterday. A simple, “No, it’s good,” would have satisfied me. I would’ve taken his word for it. Getting up close and personal with my future sandwich was a test I didn’t need the shopkeeper to take. Ah, but therein we celebrate cultural difference. Not everybody’s so squeamish, nor prays to the gods of plastic wrap. And you know what? Between us? I ate the bread anyway. Gobbled it. Tasted just fine. (Maybe better! I could find out by sampling a non-squeezed and freshly-squeezed roll in a side-by-side taste test, but let’s keep our eyes on the prize, shall we?)

So usually, when it comes to assessment, we ask people what they know. Then we label it and measure it. Ah, but how do we measure it? We need some metric, right? We could compare ourselves against others. We usually do… which isn’t necessarily healthy. Nor is it necessarily fair, because we’re all little snowflakes in very special snowglobes. Who knows if someone’s snowglobe was recently rocked, or whether someone else’s snowglobe was made out of double-insulated glass? Is it fair to compare Hawaiian snowglobes and Arctic snowglobes? Does everybody get where this belabored metaphor is going?

It’s best to compare ourselves against ourselves. We’re our real competition. We’re our best yardstick. How have we grown? What do we know now that we didn’t know before? That speaks to meaningful change and, hopefully, to cast it in terms of science, significant change — because this PhD shebang isn’t just a neato thing to do on a free afternoon or 1,825… I’m gunning for big kid, philosophical status. That’s DR. Felt to you earthlings, thank you very much. This is science. I better hope it’s science, otherwise this intervention is just an exercise in well-intentioned-kumbaya-guitar-strumming — super-sweet but ain’t got no legs. With no idea what worked, why, or how, it’s impossible to extract the essential elements and work their magic elsewhere. In which case it’s “Good luck, ‘social problems,’ someone else will have to solve you! But if you want to send your kid to a really fun 6-week communication camp, come on down!”

Unh-uh. Not on my watch.

So, assessment, mes amis. Assessment. This should occur pre- and post-intervention, right, so we can quantify how our participants have changed. Good. But changed according to what? Yes. Knowledge, attitude, and practice, I was thinking. Great. In terms of what? Mhmm. So we drew up a list of objectives — things that, by the end of our journey together, we want our students to know, believe, and do. These are the things we’ll need to measure, so we’ll be able to tell whether we’ve achieved our objectives.
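
To make “quantify” concrete: here’s a minimal sketch of what the pre/post comparison could look like once the scores exist, using a simple paired t-test. Every number and name below is invented for illustration — the real instrument and scoring are still under construction.

```python
# A minimal sketch of the pre/post comparison, assuming one knowledge
# score per participant at each time point. All numbers are invented
# for illustration -- not real study data.
from scipy import stats

# Hypothetical knowledge scores (0-20) for the same 8 participants,
# before and after the intervention, in the same order.
pre  = [8, 11, 7, 9, 12, 6, 10, 9]
post = [13, 15, 11, 12, 16, 10, 14, 12]

# Paired t-test: did scores change significantly within participants?
t_stat, p_value = stats.ttest_rel(post, pre)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"mean gain: {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```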

Famous! Splendid! So I wrote some questions pertaining to those objectives. But that’s not the end of the story.

Why? Because it wouldn’t make a very good blog post… Because some of those things don’t belong on a pre-test. I don’t think. Why? Well, the knowledge items are lesson-oriented. For example, by the end of the message development lesson, we want them to know the elements of an effective message. Super. Should that go on the general pre-test? Well, it could, but we have 12 lessons, you know, so that’d make for a really long pre-test. Also, some items need to be on the pre-test, Day One, before we’ve sunk deep our benevolent claws and changed the state of our participant pool. So unless we just test the living daylights out of the kids on Day One, we’ve gotta save those specific, lesson-oriented questions for their own day.

Terrific.

Or maybe… we avoid asking the questions entirely. Ah ha. This is what I want to do in terms of measuring practice. We observe. (Observe how? Do we just watch, do we videotape?) We judge performance, let participants show us what they know and can do. (What is performance? Classroom behavior (not that they’re in school, per se), completed activities?) Hmm. And how do we assess this? How much do we pre-determine (etic, like checking off a checklist) and how much do we allow to emerge (emic, like just taking notes and seeing what’s there)? Exactly.
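
For what it’s worth, the etic and emic approaches aren’t mutually exclusive on paper. Here’s a sketch of a single observation record that carries both — the checklist items and the example entry are entirely hypothetical, not the study’s actual rubric.

```python
# A sketch of how one observation might be recorded, mixing the etic
# (pre-determined checklist) and emic (open field notes) approaches.
# Checklist items and the example record below are hypothetical.
from dataclasses import dataclass, field

CHECKLIST = [
    "asks a question",
    "helps a peer",
    "operates equipment independently",
    "presents work to the group",
]

@dataclass
class Observation:
    participant: str
    lesson: int
    checked: dict = field(default_factory=dict)  # etic: item -> True/False
    notes: list = field(default_factory=list)    # emic: whatever emerges

obs = Observation(participant="P07", lesson=3)
obs.checked = {item: False for item in CHECKLIST}
obs.checked["helps a peer"] = True
obs.notes.append("Translated the prompt into Wolof for a younger participant.")
```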

And who should do the judging? Us? Surely not me, the white girl from the States who’s in and out in 8 weeks flat and won’t even be here for most (all?) of the training? My Canadian camarade de chambre who will arrive Sunday? The teachers as they’re teaching? The other teachers while they aren’t teaching? Other staff members? What about the participants themselves? This is a program that prizes interaction, participation, self-expression, emancipation, defiant possession of one’s own learning. Kindred spirit program Global Kids (might I be so audacious as to claim this association? All hail, Global Kids!) utilizes alternative assessment models to empower youth-directed learning. Awesome. Since imitation is the sincerest form of flattery, I’m hoping to crib that from ‘em. Dig the badges.

But then, what about the participants playing a role in the research process as well? Oh yeah, right… That seems conceptually harmonious and, more importantly, moral. There happens to be a rich body of literature pertaining to youth as research participants. So… guess we should do that, somehow…

Meanwhile, we have to add in some contextual stuff — self-efficacy, the origin of all things, whose scale I lifted from a previous study; demographics, e.g., age, grade, parents’ professions; communication behaviors, e.g., access to devices and ways of using them. Questions pertaining to the latter two categories I appropriated from the Kaiser Family Foundation’s Generation M2: Media in the Lives of 8- to 18-Year-Olds report, which recently published its third wave of data.
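
Scoring a borrowed Likert scale is mechanical but worth pinning down, since most published scales mix in reverse-worded items. A sketch, assuming a 5-point format; which positions are reverse-scored depends on the actual scale, so the indices here are hypothetical:

```python
# A sketch of scoring a borrowed self-efficacy scale, assuming a
# 5-point Likert format (1 = strongly disagree ... 5 = strongly agree).
# Which items are reverse-worded is a property of whichever published
# scale gets used; the positions below are hypothetical.
REVERSED = {2, 5}  # hypothetical indices of reverse-worded items

def score_self_efficacy(responses):
    """Average the items after flipping the reverse-worded ones."""
    adjusted = [
        (6 - r) if i in REVERSED else r  # 6 - r flips a 1-5 response
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

print(score_self_efficacy([4, 5, 2, 4, 3, 1]))  # one participant's answers
```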

And then there’s the SEL stuff — where participants are at in terms of their social-emotional health, what they know about the five SEL competencies (self-awareness, self-management, social awareness, relationship skills, and responsible decision-making), their attitudes in terms of the importance of these things, their practices. Good, wrote those. Do we want them to know the definitions or be able to identify the phenomena? Right. Identify. So make those questions “find the best example.” All righty.
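
For the record, here’s roughly what a “find the best example” item might look like in the bank, tagged to a competency (and to a lesson, per the earlier scheduling worry) so scores can be broken out later. The stem, options, and key are all invented:

```python
# A hypothetical "find the best example" item. Nothing here comes from
# the actual instrument; the tags just show how scores could be broken
# out by competency and scheduled by lesson.
item = {
    "competency": "self-management",
    "lesson": 4,  # hypothetical: which day this item belongs to
    "stem": "Which of these is the best example of self-management?",
    "options": [
        "A. Aminata yells when her brother takes her pen.",
        "B. Aminata counts to ten, then asks for the pen back.",
        "C. Aminata hides the pen so no one can take it.",
    ],
    "key": "B",
}

def score_item(item, answer):
    """1 point for choosing the keyed best example, 0 otherwise."""
    return 1 if answer == item["key"] else 0

print(score_item(item, "B"))  # -> 1
```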

Ditto the NML stuff — what they know about the 12 NML skills (play, performance, appropriation, multi-tasking, distributed cognition, collective intelligence, judgment, transmedia navigation, networking, negotiation, simulation, visualization), their attitudes, and practices.

Then there’s the stuff that I think, and research supports, is important too: intrinsic motivation, which is associated with possible selves, which can link up with resilience, which has implications for asset-based community development, which dovetails with positive deviance, which seems awfully similar to appreciative inquiry. Collectively, all of this argues for the necessity of requesting:

  • asset inventories;
  • community maps;
  • communication networks; &
  • learning ecologies.

So… that’s cool… to write… in French… and give to Senegalese youths to fill out… in French… when their native language is Wolof… and they’re burned out on school (which lets out July 2)… and they just wanted to learn how to use a camera… (is that true? what do they want to get out of the program? what did they think it’d be about? good questions…)

So I wrote it. The first draft. And now the team just has to sift through the pages of Q’s and weigh each item’s importance, revise with respect to cultural appropriateness, slash and reconstruct in light of grammatical atrocity, and come to some consensus. That’s what we’ve been doing (in between my last-minute dashes home to receive (or not) the Internet repairmen, who have finally deduced that my problem is due to my second-class, pre-paid service citizenship, and can only be fixed via upgrade (read: price-doubling), which I hope to suck up and purchase tomorrow morning, à 7h30). That’s what we’ll continue to do (quickly — but not too quickly — but quickly, because time’s a-tickin…).

But let’s step back and survey the big picture here: When all of this is said and done, will we know how the participants have changed? Yes, to that, I think, the answer is Yes. Good. But here comes the thornier question:

Will we truly know which theory, from this potpourri of Yes We Can scholarship, was the one that did the trick? How do we render this phenomenon of particularity — this summer assemblage of snowflakes from very special snowglobes — into transportable universality?

THAT’s what I really want — not for the sake of adding to theory, although that’d help a bookwormy brotha out, and I’d love to do him a solid. No. This isn’t a me-show (I proclaim, on my self-aggrandizing blog…). It’s so we can say, “Here you go, ‘social problems,’ we’ve got a silver (or, okay, with a little humility, bronze or copper) bullet that we think’s gonna knock you out.”

I’m here to make the world a better place, people. I ain’t playin.

I just finished 20th grade. I’ve gone to school for YEARS in order to know so little. Ah, but maybe from knowing what you don’t know, you can begin to learn the all-important things you must?

As they say in Senegal, Insha’Allah.
