The semester is nearly over at Southern Virginia University, where I teach, and that means that I must soon pass through that torment known as "grading."
I understand the purpose of grading; I also understand that grading has never accomplished that purpose, and is even further now from doing so.
Supposedly, C is "average," with the other letters representing gradations of quality above and below that mythical creature. But that model is flawed from the outset.
For instance, I'm teaching a course in fiction writing. While there are specific techniques I teach, and I can certainly detect many flaws both major and minor, I also know that if you write a good enough story, the quality of the plain tale will trump any of those flaws. So do I grade the plain tale -- a brilliant one comes to a writer only now and then, if ever, across a long career -- or the mastery of specific techniques?
And in calculating "average," what, exactly, am I averaging? The quality of stories, or the mastery of technique, comparing only the students in this course this semester? Since averaging is mathematical, do I slap grades on the stories and then average those?
How does that tell anybody anything? If this is an exceptionally good year, then a student who would have received an A in a different year might receive only a C this year -- based on the average performance of this class.
That is not only unfair, it's grossly misleading. No teacher does this, at least not with artistic works, because the grade should reflect the student's accomplishments compared to the average class, over time.
When you've been teaching the same course for thirty years, perhaps you'll have some idea of what "average" fiction writing is, for students taking that class at that stage in their career.
Except that no student is ever at the same stage as any other. So are my grades only designed to be a prize to those students who entered my class far advanced in their technique and tales, while the students who needed the class most are the ones doomed to get lower grades because they cannot, in a single semester, catch up to those shining stars?
So I toss in a serious fudge factor, called "progress." Now the whole idea of averaging gets tossed out the window, because in my fiction writing class you can get an A for writing brilliant stories -- or for working hard, thinking deeply, and making outstanding progress, even if no single story you submitted is actually of a quality deserving of an A.
Then there's the quality of a student writer's critiquing of other writers' work -- which is actually the best indicator I've found of their likely success in acquiring the skills to become a commercially successful storyteller.
So when another teacher -- perhaps someone evaluating a student for admission to a graduate school writing program -- sees the grade that student got in my class, what are they to suppose that my grade actually means?
If this isn't bad enough, let's add the single most important complication: Grade Inflation. All college professors know that "B" is the new "C." We give lip service to C being "average," but we also know that almost nobody is giving out grades that way.
There are many reasons for this. First, we professors really want our students to do well. After a semester of teaching them, watching their faces, listening to their questions and comments, most of us actually like our students. In some classes, in some cases, there are students we really come to love.
So ... what if one of my best students gets bogged down at the end of the semester and decides, correctly, that their grades in their major field matter more for getting into the right grad school or the right job than their grade in a fiction-writing class? Assignments are late, their quality suffers from being rushed, or the student comes to workshop sessions having not read the stories we're discussing.
The professor, being a compassionate human being, is far more disposed to giving the benefit of the doubt to such hard-working, smart students, even if their work slacks off a bit toward the end of the semester.
Yet we also have a sense of fairness, and we know that even the students who don't show much evidence of hard work or deliberate practice don't deserve to have the playing field tilted against them. Why should they have to face the headwind, while the teacher shelters the favored ones?
The result is that leniency toward one soon becomes an easing of standards for all.
Every now and then, a dean or provost or department chair or principal will rail against grade inflation, demanding that everybody go back to giving Cs for average work.
But only new teachers fall for this, because those with any experience know that if they actually obey such instructions, their students will get lower grades than everybody else's, because nobody else is going to change a thing.
And then you'll have that awkward meeting with your boss, where the boss, looking at the semester's grades, sees that your students are "doing worse" than anybody else's, so ... what are you doing wrong as a teacher?
In vain do you say, "You said to get rid of grade inflation, and I did." Because the boss will reply, "So you're claiming that only you eliminated grade inflation, and everybody else scoffed at that requirement?"
Well, yes, that's exactly what I'm claiming, the novice teacher thinks but does not say. And next time there's a sermon decrying grade inflation, the not-so-novice-anymore teacher will keep a straight face, then return to the classroom and grade as if "average" were denoted by a B.
Because it is, and everybody knows it. Today, C is the new D. And D is the new F -- except that because course credit is awarded, the student usually doesn't have to repeat the class.
That's right. Grade inflation lets you promote students who, in a rational grading system, should not get course credit and should have to return and take the course again. Which both of you will hate.
Where in all this is there even the slightest incentive for teachers to grade on a strict C-as-average basis?
Oh, I'm aware that teachers who use multiple-choice, true-false, and fill-in-the-blank tests end up with numbers on every test. They can lay out those numbers and reach genuine mathematical medians, means, and averages. Or they can use raw percentages: 95% is A, 85% is B, 75% is C, and so on.
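For what it's worth, that raw-percentage scheme is easy to state as code. Here is a minimal sketch assuming the usual ten-point bands (the 90/80/70 cutoffs are my assumption; the text gives only 95, 85, and 75 as representative scores):

```python
def letter_grade(pct: float) -> str:
    """Map a raw percentage to a letter grade, assuming ten-point bands."""
    cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    for floor, letter in cutoffs:
        if pct >= floor:
            return letter
    return "F"

# The representative scores above fall where you'd expect:
# 95 -> A, 85 -> B, 75 -> C
```

The exact cutoffs vary by teacher, of course; the point is only that such a scheme reduces grading to mechanical comparison.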
Then, however, we have to deal with the fact that very few teachers have any skill or training in the creation of tests.
I grew up in the home of a professor of education, and if there's one thing my dad taught me, whenever we discussed his work or my schoolwork, it was this: Multiple-choice, true-false, and fill-in-the-blank questions measure very nearly nothing. The teacher ends up with numbers, but those numbers say nothing useful, because the answers are reflective, not of student learning, but of teacher test-writing skills.
True, a slacker who never cracked a book or paid attention in class is going to fail those tests -- but that kid would fail any kind of test, good or bad.
And ... here's the nasty little secret ... a kid who has mastered the skill of test-taking will do very well on tests about which he knows nothing.
I submit myself as the poster child for this. My last math class was Geometry, and my last grade in that class was D, because it was pre-trig and I didn't care. I knew that math beyond geometry would never be a part of my life. Then I took the ACT (I was only applying to BYU, where my dad taught, so I didn't need to take the SAT).
My father had taught me how to take multiple-choice tests. Two simple rules:
1. Finish. Answer every question, whether you know what is right or not. Maybe you can outguess a particular question because some of the possible answers resemble each other, and one's a weird outlier. But it doesn't really matter. Random guessing vastly increases your odds of getting a better score than you would on questions you don't answer at all. (This holds true even with tests that subtract a quarter-point for wrong answers.)
2. Never change an answer that you've already marked. Unless you really do remember some fact that you forgot on the first pass, so that the correct answer is now crystal clear -- which almost never happens -- your first guess is far likelier to be right than your second guess.
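Rule 1 is easy to check with a little arithmetic. Here is a minimal sketch of the expected points per question, assuming a four-choice test (the quarter-point penalty is the one mentioned above; the four-choice format is my assumption):

```python
def expected_guess_score(n_choices: int, penalty: float = 0.25) -> float:
    """Expected points per question from a blind random guess,
    on a test that subtracts `penalty` points for each wrong answer."""
    p_right = 1.0 / n_choices
    return p_right * 1.0 - (1.0 - p_right) * penalty

# A blank question is worth exactly 0 points. A blind guess on a
# four-choice question averages 0.25 - 0.75 * 0.25 = 0.0625 points,
# so guessing beats leaving it blank even with the penalty.
```

And on a test with no penalty at all, like the ACT, there is simply no reason ever to leave a question blank.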
My dad showed me multiple-choice answer sheets on the tests I helped him mark, on the dining room table in the evening. He said, "Look at all the answer sheets with erasures, where the students changed their answers. I challenge you to find any that went from a wrong answer to the right one."
In dozens of tests, there were many examples of changing from one wrong answer to another, and almost as many of changing from the right answer to a wrong one. But there was not one case in which the student changed from a wrong answer to the right one.
When you're guessing anyway, just admit that you don't know and move on.
So these two principles were in my head when I took the ACT in 1968. Nobody was surprised that I got the top score possible in English or reading or whatever it was called. But everybody, me especially, was shocked that on the math section, I scored in the 99.3rd percentile.
I got a better score on math than most of the students in my school who actually knew anything about the subject.
Why did I do so well? Truly, I had never even seen many of the symbols and functions in the final three-fourths of the test. Now I know, vaguely, what some of them mean, but I was guessing about answers where I had no idea what kind of operation was taking place.
But I answered every question. The better-educated students spent much of their time working out the math to make sure they got the right answer -- and then left half the questions unanswered when time ran out. So I had a one-in-four chance of guessing the right answer to those questions, while they had no chance at all because they had never gotten to them.
The more numeric the test scores are in a class, the less the teacher is able to determine which students know what they're doing, and which are simply good at test-taking.
As my father taught me, the only tests worth giving are pure essay tests. (The equivalent in mathematical courses is to have a test consisting only of problems, with no selection from a list of possible answers.)
Essay tests are seriously flawed, though, in this way: Writing clear sentences depends on first thinking clear thoughts, which is not likely to happen if you are phobic about test-taking. I've seen brilliant students -- including some who are exceptionally good writers -- completely fall apart on essay tests because they are incapable of rational thought under that kind of stress.
But genuinely test-phobic students are pretty rare. In most cases, essay tests fairly represent the quality of the students' thinking and the depth and breadth of their knowledge and analysis.
In fact, another thing my dad taught me is that if you have a well-written final exam, you can pass it out on the first day of class, so that students can experience the whole course with a sharp awareness of what is going to be pertinent on that final exam.
And unless it's a course in memory enhancement, a good essay test is best conducted with books open. Well-written essay questions cannot be answered by quoting directly from the textbook; they depend entirely on the student's analysis, ideas, and understanding.
Therefore, why not have all their books available to them during the test, so they don't falter because they can't remember, at that moment, the name of a particular character or the date of a particular historical event? What the teacher should care about is whether they understand the meanings and causes of things.
In my course in the fiction of Tolkien and Lewis -- the only class I taught this semester that could possibly have a final exam -- I didn't pass out the final on the first day of class, because I was taking some new angles and had to rethink and rewrite the final I had used the last time I taught it.
But the students got the final several weeks ago. I told them that of the questions on the test they were looking at, they would be required to answer only two -- one that I chose, and then any other question that they chose.
I told them that they could bring their copies of the books to the final and refer to them at will. They should give some thought to all the questions, because they didn't know which one I'd choose. And it would be sensible, I told them, to plan out a clear, thorough answer to two of the questions. That way, in case I happened to choose one of them as the required essay, they'd still have another well-prepared answer for the essay of their own choosing.
The only requirement I gave them was that they not actually write the answers in advance and bring them to class. Locking in their answers early might eliminate the possibility of thinking of something new during the process of taking the test -- and I know from experience that some of the best ideas come up in the process of writing.
But now we get back to grading. A good essay test will yield student answers that are not what the teacher expected. Some of the teacher's ideas will be challenged. And even when the student has overlooked some important aspect of the subject, an open-minded teacher has to understand what the student did with the ideas he or she thought of.
In other words, just as with grading fiction or term papers, the process of grading essay-only final exams takes forever.
And most of the time, the teachers have to turn in their grades a ludicrously short time after the final exams.
I have a hard time meeting that deadline, and I teach only one course that has any kind of final exam. Imagine the professor with four courses that have essay finals. Or the schoolteacher who teaches six classes that must receive final grades right after the final exam.
No wonder so many resort to multiple-choice and true-false tests that can be graded by a machine -- or by a teacher who is in a mental stupor after hours and hours of grading. Your nine-year-old kid can score a multiple-choice test by comparing student answer sheets with the key sheet. You can get help.
With essay tests, the teacher can't delegate the grading, period. If somebody else can grade your students' essays, then you didn't write a good test. Somebody who wasn't in the room during lectures and class discussions, who hasn't already read all the students' papers and previous tests, isn't capable of understanding the context in which the test essays were written.
It's as useless as grading the hideous five-paragraph essays that my kids had to write to get through school in Greensboro. I hope that you all understand that there is never a situation in real life where the five-paragraph essay formula will be useful.
Our children learn more about good writing from conversing with their friends by phone-text and email than they will ever learn by preparing for a test based on the five-paragraph essay. Because when kids send each other texts, it matters whether they wrote clearly and accurately, with some real thinking about the consequences of their words.
And nothing about the five-paragraph essay will ever matter to anybody.
I know my students are more concerned about not letting me down than about the actual letter grade they get. They know I have high expectations. They want to meet those expectations.
But this goes both ways. Because, like parents giving Christmas gifts to their children, we teachers dread the idea of disappointing our students by giving them lower grades than they hoped for.
The irony is that I have agonized over giving a student a B-minus, because despite the fact that they worked very hard, comprehension and mastery still remained elusive. Later, that very student thanks me profusely for that B-minus, because it was the highest grade they got that semester. "It saved my GPA so I kept my scholarship," I've been told.
What can I say? I regarded a B in any course as a shattering failure, when I was in school. Not every student feels that way.
Do grades matter? Well, yes, for a very brief time. High school grades matter when you're applying to colleges -- but a few imperfections in your GPA might save you from wasting your time and your parents' money by getting into a ridiculously expensive high-reputation college where you will never have a class from the top professors, who are only there to do research and work with graduate students.
Instead, you'll be taught by grad students and adjunct professors, who are not likely to be better teachers than those you'd get in the local community college. So why not take your imperfect GPA and go to the local state school, where the professors not only have more time for you, but also care about teaching all their students, and not just a chosen few?
Your parents think they want you to go to Harvard or Stanford or Duke, either because they did, or because they didn't. It has nothing to do with you or your life. If the school you end up attending has a good library and a true university culture -- meaning that people have open minds and don't punish you for speaking a nonstandard idea -- you'll be way better off than if you attended, say, Berkeley or Yale.
Once you've passed from high school to college, grades begin to matter less and less. Sure, graduate schools care about your undergraduate performance as measured by grades. But once you get out into the real world, as long as you have mastered the core of knowledge required by your job, you will be judged only on your job performance and your character.
Nobody in the real world ever looks at your grades or test scores. They don't matter anymore, unless your ego is so sadly in need of stroking that you bring them up. (Yes, I saw how I brought up my own grades several times in this essay. I rest my case.)
This week, I'm giving my Tolkien-Lewis students a demanding final exam. I'm treating the final exam period of my fiction-writing and hymn-writing classes as a last workshop session. Then I have a few days in which to read or refresh my memory of everything they've written throughout the semester, while keeping in mind the quality of their class participation.
My grade is the last gift -- or rebuke -- that I can give them. Fortunately, I know these kids by now, and I know that they are all capable of acing this final and walking away with an A or A-minus in the class.
The only way I'll have to disappoint any of them is if they disappoint me first.
Trying to write with Microsoft Word is like wiping a baby's bottom with sandpaper. Yes, it'll do the job, but nobody enjoys the screaming.
In an article in PC Magazine (10 Apr 2017), Ben Bajarin debunks three myths about the way young people (18-24) are using computer technology today:
Myth #1. Millennials are done with Facebook.
Not true. Facebook is still king. Almost 90% of millennials report that they use Facebook daily. It's still the top app, with Snapchat and Instagram well behind.
In fact, Snapchat and Instagram are used for different purposes, so they aren't actually competing with Facebook, which is, at core, a way of keeping in touch, while the others are sought out more for entertainment.
Myth #2. Millennials don't use personal computers.
You've heard, I'm sure, that it's all tablets and smartphones now, and young people don't care about laptops and desktops.
That's like saying that because you have a rice cooker and a microwave, you don't need an oven or a stove anymore. Um, what about Thanksgiving?
It's true that young people -- children especially -- spend hours glued to "screens" that they carry around with them. Car travel has become vastly more pleasant to adults because the kids have so many virtual worlds in their sweaty little hands.
And when I watch young adults texting furiously as they walk along, never noticing that if I hadn't seen them and stopped, they would have died under the wheels of my car before they got to press send, I can easily believe that these portable screens are somewhere between breathing and peeing as a necessity of life.
Yet the 18-24-year-olds in the survey reported that when it comes to doing actual work, they rely on laptop or desktop computers as their preferred platform.
One reason should be obvious to anyone who has tried to write anything long using the virtual keyboard on a smartphone or tablet. Even though millennials are way faster on that keyboard than old coots like me, they are also keenly aware of how many typos and false auto-corrects there are. Even two-finger typists can usually go much faster and type more accurately with a real keyboard.
I daresay that all my students at SVU, who are fully equipped with smartphones, write all their papers and stories on actual computers. And they wouldn't have it any other way.
Just because you love to ride your motorcycle doesn't mean you won't prefer to be in a car on a stormy day.
Myth #3. Millennials hate face-to-face meetings.
Hey, FaceTime and Skype are a wonderful way for us to talk to (and listen to, and see) our grandchildren, because their parents thoughtlessly chose to live near their places of employment on the west coast. But those minutes online don't begin to compare to having them climb on our furniture, chatter away in the car, or thumpety-thump down the hall on their short little legs.
Guess what -- it's not just old people who feel that way. Young people, too, report that "Collaborating through things like Google Docs, or a messaging client like iMessage were sufficient to keep making progress. However, when it mattered at critical stages, nothing replaces a good old fashioned meeting."
When your field of vision is limited to a computer screen, it's almost impossible to get the kind of feedback from others that our community-oriented brains routinely process in face-to-face conversations and meetings. Screens are a necessity when distance prevents a real meeting -- but young people know that face to face is better. Period.
In fact, the only time screens are better than face-to-face is when you're firing somebody that you're afraid might get violent.
If there are no babies born in heaven, how will there be joy?