Re: Not sure
The article is not so much about whether automated marking is always better for every purpose; it addresses a much more specific question: whether, for the purposes of marking the writing component of the NAPLAN test, an automated system could do at least as well as human markers.
I think much of the intent of the NAPLAN marking automation was directed at optimising the logistics of the process. The time taken to mark manually was of the order of six weeks, if memory serves, and there was considerable pressure to reduce the gap between sitting the tests (April, I think) and publishing final results (September?). Despite hiring experienced people (usually former teachers and occasionally principals) and training them beforehand, it was still necessary to remove markers who could not mark consistently, i.e. whose scores diverged from pre-marked test scripts randomly inserted into their workload. So if a piece of software can do the job consistently and at least as well as experienced markers, it's a simple choice.
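The consistency check described above can be sketched in a few lines: pre-marked "calibration" scripts are seeded into each marker's queue, and markers whose scores drift too far from the agreed reference marks are flagged. This is purely illustrative; the function name, data shapes, and tolerance are my assumptions, not the actual ACARA process.

```python
def flag_inconsistent_markers(scores, reference, tolerance=1.0):
    """Hypothetical sketch of the marker-consistency check.

    scores:    {marker_id: {script_id: mark}} for calibration scripts only.
    reference: {script_id: agreed reference mark}.
    Returns the set of marker_ids whose mean absolute deviation from the
    reference marks exceeds `tolerance` (threshold chosen arbitrarily).
    """
    flagged = set()
    for marker, marked in scores.items():
        # Deviations on the seeded calibration scripts this marker saw.
        devs = [abs(mark - reference[sid])
                for sid, mark in marked.items() if sid in reference]
        if devs and sum(devs) / len(devs) > tolerance:
            flagged.add(marker)
    return flagged

scores = {
    "m1": {"s1": 5, "s2": 6},   # tracks the reference closely
    "m2": {"s1": 2, "s2": 9},   # erratic on both calibration scripts
}
reference = {"s1": 5, "s2": 6}
print(flag_inconsistent_markers(scores, reference))  # {'m2'}
```

A real system would of course use many more calibration scripts per marker and a properly validated tolerance, but the shape of the check is the same.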
Marking speed obviously goes up, as does consistency. Handwriting difficulties disappear: in the fully automated process, students type their answers rather than hand-writing them for scanning, as at present. A considerable amount of garbage-removal is also obviated, because there are fewer opportunities to mis-identify scripts. It was a constant bugbear that school staff would cross out the pre-printed name on a test book, hand-write a different student's name on the cover and then give it to that student (usually when the original student was absent and the school had run out of test books). The barcode identifying the original student was left in place, so the test score would be automatically assigned to the wrong student. The list of things like this that have to be hunted down and cleaned up is lengthy, and eliminating the scope for them improves both timeliness and accuracy. (Any process that depends on coordinating more than a handful of teachers and principals is like herding cats and is best avoided.)
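The re-issued-book problem above amounts to a simple cross-check during cleanup: if a book's barcode resolves to a student recorded as absent on test day, the score has almost certainly been assigned to the wrong student. A minimal sketch, with entirely hypothetical field names and data shapes:

```python
def find_misassigned_scripts(scripts, absent_students):
    """Hypothetical sketch of the script-identity cleanup.

    scripts:         list of {"barcode_student": id, "sat_by": id},
                     where "sat_by" is the student actually present.
    absent_students: set of student ids recorded absent on test day.
    Returns the scripts whose barcode points at an absent student --
    the classic sign of a re-issued test book.
    """
    return [s for s in scripts if s["barcode_student"] in absent_students]

scripts = [
    {"barcode_student": "A123", "sat_by": "A123"},  # barcode matches sitter
    {"barcode_student": "B456", "sat_by": "C789"},  # re-issued book
]
absent = {"B456"}
print(find_misassigned_scripts(scripts, absent))
```

With typed, online delivery the student authenticates directly, so this whole class of mismatch never arises in the first place.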
The wider question of whether automated marking can capture the true subtleties and creative abilities of humans is a complex one, but for every example where software falls short, you can point to just as many where highly experienced and knowledgeable humans fall short too. The Leavisite 'wars' are a good example of where human judgement can go wrong in a particularly destructive way. On a more practical level, where a student has been given a mark that a school or parent believes is anomalous, a re-mark can always be requested. Re-marks do occasionally happen under the present system, and almost certainly would under a fully automated one too.