One of the longest parts of assignment marking (besides actually sitting down and making notes on what’s good and what isn’t) is the creation of feedback forms. In fact, it was taking us so long to make them that we’ve semi-automated it.
So, weekly assignments. These are units of work that some of our modules now require students to submit online every week; they get marked and returned. Ideally, it’d be better for everyone if the process were more automated (unit tests, code linting and a check over by the GLA) rather than requiring manual marking, but that’s a post for another time.
The process for this should be as follows:
- Download all student submissions
- Extract the zip file and sort the submissions by student
- Copy the marking template for the student
- Open the marking template and write in the student’s registration number and name
- Read the report
- Open (and run) the submitted code, checking what was submitted
- Fill in the feedback sections and generate the PDF
- Add up the marks and update the marking sheet
- Repeat for 45 or so students
The problem here is that all that file management becomes time-consuming. Combine that with the mother of all zip files that our submission system produces, and things start getting messy.
Students can forget to put their information on the sheet, requiring manual lookups; the marker can mistype information; and the marker has to retype the same advice over and over again.
Tidying up the submissions
A while ago I wrote a Python script which took the zip file generated by the submission system and bucketed submissions by the student who submitted them. This helps the first step of the process quite nicely, as we no longer need to worry about missing a student’s work if they submitted multiple documents.
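A minimal sketch of that bucketing step, assuming each submitted file name starts with the student’s registration number (the naming scheme, function name and folder layout here are hypothetical, not the real script):

```python
import re
import shutil
import zipfile
from pathlib import Path


def bucket_submissions(zip_path, out_dir):
    """Extract a submissions zip and group files into one folder per student.

    Assumes file names begin with the registration number, e.g.
    "1234567-report.pdf" -- adjust the pattern for the real naming scheme.
    """
    out = Path(out_dir)
    tmp = out / "_extracted"
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(tmp)

    # Snapshot the file list first, since we move files as we go.
    for f in list(tmp.rglob("*")):
        if not f.is_file():
            continue
        m = re.match(r"(\d+)", f.name)
        regno = m.group(1) if m else "unsorted"
        dest = out / regno
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(dest / f.name))

    shutil.rmtree(tmp)
```

Anything that doesn’t match the expected naming lands in an `unsorted` folder for manual triage rather than being silently dropped.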
That takes care of our first problem: the student numbers are now known, because they’re the folder names.
Generating the feedback sheets
But what about that whole file-creation process? What we really need here is a mapping from student IDs to their marks for each section, plus any comments. This is the kind of thing a database would be really good for, but I haven’t got time to build the all-singing, all-dancing database and front-end to do that.
Time for the quick and dirty approach: CSV files and Python scripts. The idea is that we enter the student feedback and marks into a single spreadsheet, which is then used to generate the feedback sheets for the students and the summary sheets for the school office.
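To give an idea of the shape of that spreadsheet, something like the following would do (the column names and registration numbers here are made up for illustration):

```
regno,q1_mark,q2_mark,comments
1234567,4,3,"Good use of functions; remember to validate input."
1765432,5,2,"Clear report, but the code didn't compile as submitted."
```

One row per student, one column per marked section, and a free-text comments column keeps it easy to fill in during marking and trivial to parse afterwards.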
We could, in theory, use something like mail merge to do this, but we want some custom logic and error checking, and we’d also quite like to be able to extend it in future to do statistics on submissions. We’d ideally like pretty PDFs generated from text input… that’s basically LaTeX’s job.
So we spent a few hours hacking together a script that takes in a CSV file and a template, does string replacement on the template tags, looks up each student’s name in a student list, runs some sanity checks and then outputs a .tex file.
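The core of such a script might look like this — the `{{tag}}` placeholder syntax, CSV column names and function name are assumptions for the sketch, not the actual implementation:

```python
import csv
from pathlib import Path


def generate_feedback_tex(csv_path, template_path, students, out_dir):
    """Fill a LaTeX feedback template from a marks CSV.

    `students` maps registration number -> full name, so the marker
    never has to type (or mistype) a name. Unknown registration
    numbers fail loudly as a sanity check.
    """
    template = Path(template_path).read_text()
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            regno = row["regno"]
            if regno not in students:
                raise KeyError(f"unknown registration number: {regno}")
            # Every CSV column becomes a {{tag}}, plus the looked-up name.
            fields = dict(row, name=students[regno])
            tex = template
            for tag, value in fields.items():
                tex = tex.replace("{{" + tag + "}}", value)
            (out / f"{regno}.tex").write_text(tex)
```

Because every column in the CSV automatically becomes a template tag, adding a new feedback section is just a matter of adding a column and a matching `{{tag}}` to the template.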
Next we compile the .tex file and save the PDF with the correct filename for return to the students. The students get useful feedback, data entry is less error-prone, and we can spend more time looking at the submissions rather than doing admin work.
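The compile step is just a matter of shelling out to `pdflatex` — a sketch, assuming `pdflatex` is on the PATH (the function name and output layout are illustrative):

```python
import subprocess
from pathlib import Path


def compile_pdf(tex_file, out_dir):
    """Compile a generated .tex file and return the path to the PDF.

    Uses nonstopmode so one bad feedback sheet doesn't hang the
    whole batch waiting for interactive input.
    """
    subprocess.run(
        ["pdflatex", "-interaction=nonstopmode",
         "-output-directory", str(out_dir), str(tex_file)],
        check=True,
    )
    return Path(out_dir) / (Path(tex_file).stem + ".pdf")
```

Since the .tex files are named by registration number, the PDFs come out correctly named for upload with no renaming step.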
The code (with no data) is on our git server if any of the other GLAs are interested or have feature requests. When I have more time I’ll tidy it up into something a bit cleaner.