What gets published and what people read

I doubt this will be my final ‘rant of the day’ for all time, but it will exhaust the stock of things I’m itching to say right now. This one relates not to the use and misuse of statistics but rather to the rituals surrounding the publication of papers. I’ll steer clear of the debate surrounding open access; this is more about what gets published and what doesn’t!

I have tried to get papers published in Assessment & Evaluation in Higher Education (AEHE). They SAY they are “an established international peer-reviewed journal which publishes papers and reports on all aspects of assessment and evaluation within higher education. Its purpose is to advance understanding of assessment and evaluation practices and processes, particularly the contribution that these make to student learning and to course, staff and institutional development”. But my data-rich submission (later published in amended form as Jordan (2012) “Student engagement with assessment and feedback: Some lessons from short-answer free-text e-assessment questions”) didn’t even get past the editor. Ah well, whilst I think it was technically in scope, I have to admit that it was quite different from most of what is published in AEHE. I should have been more careful.

My gripe about this is twofold. Firstly, if you happen to research student engagement with e-assessment, as I do, you’re left with precious few places to publish. Computers & Education, the British Journal of Educational Technology and Open Learning have come to my rescue, but I’d prefer to be publishing in an assessment journal, and my research has implications that go way beyond e-assessment (and before anyone mentions it: whilst I read the CAA Conference papers, I’m not convinced that many others do, and the International Journal of e-Assessment (IJEA) seems to have folded). Secondly, whilst I read AEHE regularly, and think that there is some excellent stuff there, I also think there are too many unsubstantiated opinion pieces and (even more irritating) so-called research papers that draw wide-ranging conclusions from, for example, the self-reported behaviour of small numbers of students. OK, sour grapes over.

In drafting my covering paper for my PhD by publication, one of the things I’ve done is look at what is said by the people who cite my publications. Thank you, lovely people, for your kind words. But I have been amused by the number of people who have cited my papers for things that weren’t really the primary focus of the paper in question. In particular, Jordan & Mitchell (2009) was primarily about the introduction of short-answer free-text questions, using Intelligent Assessment Technologies (IAT) answer matching, yet lots of people cite this paper for its reference to OpenMark’s use of feedback. Ross, Jordan & Butcher (2006) and Jordan (2011) say a lot more about our use of feedback. Meanwhile, whilst Butcher & Jordan (2010) was primarily about the introduction of pattern-matching software for answer matching (instead of software that uses the NLP techniques of information extraction), you’ve guessed it: lots of people cite Butcher & Jordan (2010) rather than Jordan & Mitchell (2009) when talking about the use of NLP and information extraction.

Again, in a sense this is my own fault. In particular, I’ve realised that the abstract of Jordan & Mitchell (2009) says a lot about our use of feedback and I’m guessing that people are drawn by that. They may or may not be reading the paper in great depth before citing it. Fair enough.

However, I am learning that publishing papers is a game with unwritten rules that I’m only just beginning to understand. I always knew I was a late developer.
