Given my previous posting of July 3, "Consider Using [Albeit Silly] Pop Culture to Illustrate Computer Science", you might expect I'd like Yoav Yair's Distance Learning column, "Did You Let a Robot Check My Homework?" in the June 2014 issue of ACM Inroads (pages 33-35), if only for this line in it: "… something completely different (to borrow the Monty Python phrase)."
What I like more substantively about the article is its suggestion that robots (OK, software) could do grading so instructors wouldn't have to. Hear, hear! I really dislike grading. After graduating from college, I hung around my alma mater working as a computer technician and a course assistant, and once, when some professors interviewed me for a position as a teaching assistant, one of those typical interview questions they tossed at me was what I most disliked about such a job; without any hesitation, I declared, "Grading!" It's tedious, mind-numbing, and pretty much a downer: 'sniffing' through people's hard work to try to find things they may have done wrong. Since becoming an instructor rather than a course assistant, I've been happy to offload grading onto the (advanced) students who work as my course assistants, so I don't have to do that onerous task myself!
One bit of skepticism I have about automated systems doing the grading instead of instructors, though, is whether the scheme scales down: the scenario the article discusses involves thousands or even millions of assignments, but if you have only thirty or so students, might the overhead of setting up automated grading make the whole thing not worthwhile? (Recall that there's an analogous situation regarding whether 'good' sorting algorithms such as mergesort scale down.)
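To make that analogy concrete, here's a minimal sketch (in Python; the cutoff value and the function names are just my own choices for illustration) of how hybrid sorting routines handle the scale-down issue: below some input size, the sophisticated algorithm's overhead isn't worth it, so they fall back to something simpler, much as hand grading might beat setting up an automated system for a class of thirty.

```python
# Sketch of the sorting analogy: mergesort's overhead isn't worth it
# for tiny inputs, so hybrid implementations fall back to a simpler
# method below a cutoff. The cutoff here is purely illustrative.

CUTOFF = 16  # below this size, the 'fancy' approach isn't worth its overhead

def insertion_sort(items):
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

def hybrid_sort(items):
    if len(items) <= CUTOFF:
        return insertion_sort(items)  # the scale-down case: keep it simple
    mid = len(items) // 2
    left = hybrid_sort(items[:mid])
    right = hybrid_sort(items[mid:])
    # merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged

print(hybrid_sort([5, 2, 4, 1, 3]))  # small input: handled by insertion sort
```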
And then, I want to note some positive aspects of one particular occasion when I worked as a grader. Are you familiar with the name Jeffrey Ullman? He's been doing various things recently, but I'd classify him with Donald Knuth and others as one of the people responsible for a fair amount of classic (or fundamental?) Computer Science, e.g. 'the dragon book', Compilers: Principles, Techniques, and Tools (2nd edition, 2007). I once worked for him as a teaching assistant, and while all the classic work he had done naturally impressed me, what actually impressed me most was that he assigned work that genuinely required students to demonstrate knowledge of the course material, yet was also easy to grade. For example, the correct order in which some shortest-path algorithm should visit a graph's nodes might be, say, 1,3,4,2,5; students really had to know the material to arrive at that answer, and it was also extremely easy for me to check whether they had it or not. Ever since that experience, I've aspired to design assignments like that.
Then, recently, I've been involved in another situation where the way students' work is assessed is notable: the ACM International Collegiate Programming Contest. I was rather ignorant about it until 2003, and then I started taking my university's students to it each year, and since 2010 I've been serving as a Site Director. In my region (East Central North America), assessment works as follows: if a program submitted by a team of students compiles and runs successfully on some test data, producing correct output (kind of like the "1,3,4,2,5" above), then they get credit for it; otherwise, they don't. And to prevent students from simply 'canning' (hard-coding) the desired output, not all the test data is revealed to them during the contest. Thinking about it, this is another situation, like my work for Jeffrey Ullman, where grading has been made relatively easy.
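For the curious, here's a rough sketch (in Python; the command, file names, and time limit are hypothetical, not the actual contest judge's setup) of the kind of check such a judge performs: run the submitted program on input the students never saw, and compare its output with the expected answer.

```python
# Minimal judge sketch: run a submission on hidden test input and
# compare its output with the expected answer. File names are hypothetical.

import subprocess

def judge(command, input_file, expected_file):
    with open(input_file) as f:
        result = subprocess.run(command, stdin=f,
                                capture_output=True, text=True, timeout=10)
    with open(expected_file) as f:
        expected = f.read()
    # compare whitespace-insensitively, token by token
    return result.stdout.split() == expected.split()

# e.g. judge(["python3", "team42_problemA.py"], "hidden_A.in", "hidden_A.ans")
```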
Now, as I've said, because of these experiences I aspire to arrange assessments that are effective but also efficient. But as Blaise Pascal (though some think it was Mark Twain) remarked about not having the time to write a shorter letter, I don't always have time to design assignments that way; it's easier just to assign anything, e.g. textbook exercises. One thing I do is very similar to the programming contest and to unit testing: I ask students to run their programs on some specific inputs and to submit records of these demonstrations; then when I grade, I look at the demonstrations before the programs themselves, checking for what I know to be the correct results, e.g., perhaps "1,3,4,2,5" as above.
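Here's a rough sketch (again in Python; the expected answers and the file-naming scheme are made up for illustration, not my actual course setup) of how that demonstration-first check might be scripted:

```python
# Sketch of checking submitted demonstration transcripts before ever
# reading the programs: scan each transcript for the output I already
# know is correct. Expected answers and file names are illustrative.

EXPECTED = {
    "run1.txt": "1,3,4,2,5",
    "run2.txt": "no path exists",
}

def check_demonstrations(expected):
    for transcript, answer in expected.items():
        with open(transcript) as f:
            ok = answer in f.read()
        print(f"{transcript}: {'looks right' if ok else 'check by hand'}")

check_demonstrations(EXPECTED)
```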
With that, I can cover two of the "three key conditions that enable students to benefit from feedback on their academic tasks" which Yair discusses: [1] understanding what 'good performance' is (the demonstrations need to work), and [2] understanding what their own current performance is (the demonstrations did or didn't work). Much of grading may then reduce to verification. But there's still the third condition for feedback to be beneficial: [3] "identify means to close the gap between [...] good performance [and] their own current performance". This is hard. In fact, this condition may be the fundamental work of teaching.
I’m going to leave this commentary at that, for now. But what do you think? How do you feel about grading, and would you say that you use any ‘tricks’ to manage it?
Regards,
Hugh
P.S. While writing this I actually did a Web search on Jeffrey Ullman, and one document said, “He is currently the CEO of Gradiance.” This Fall I’m teaching “Compiler Design and Construction”, and I’m using the dragon book, and the back of it says, “Gradiance is a Web-based homework and lab assessment resource for students and instructors. For Compilers it offers a collection of homework sets…. For more information about Gradiance, please visit aw.com/gradiance.” Hmm, I think I should check this out! But doing so takes time…. (;-)