The Thorny Issue of the Research Interview: What did they actually mean?


For any researcher grappling with the complexities of a qualitative study, the process of generating and analysing interview data is a thorny one. Most of us are not discourse or language specialists, yet we are completely dependent on the vehicle of language to receive, process and communicate our work. My recently completed EdD thesis addressed a number of considerations that it is imperative for us to confront. Here is just one.

Given that the purpose of an interview is to generate data focused on the respondent, the biggest question must surely be: “What did they actually say?”

Let’s pause and unpack that for a moment. What do we mean by ‘the respondent said’?

Do we mean the literal words used? To what extent are we assuming alignment between researcher and respondent in the definitions and meanings attributed to each word? Would they, or we, use the same word to convey the same meaning with all audiences and in all contexts? Possibly not – so how do we know which meaning is being attributed in the interview with us?

Do we mean just the words that were spoken? What about the role of silence: short gaps, reflective pauses, pained pauses (the role of silence is fascinating!)? What assumptions do we make about longer pauses – is the respondent buying thinking time or stuck for an answer? How does that affect how we respond – do we smile patiently or move the conversation on? And how does our response to the silence shape the data that comes after it?

Do we mean just verbal transactions? In which case, what about the arm gestures, winks, smiles, pointed fingers and folded arms? Do those non-verbal cues align with, or contradict, the words that are being spoken?

What about utterances? We all use features such as ‘um’ and ‘you know’ to subconsciously buy thinking time or to invite audience agreement, for example. How do those utterances shape our perception of the respondent during data generation, affect our follow-up questions, and fix our lens for when we later analyse the interview data?

Furthermore, we can bound our data and analysis by date, time, location, audience and context. But to what extent should we also bound them by mood, weather, temperature, and the myriad other influencing factors that we know affect human behaviours and thus what is said?

Which version of ‘what they said’ are we using as the basis for our analysis?

As researchers focused on a particular subject or topic, we are at risk of treating the transcript as a vehicle of single truth when conveying insight into our given area of study. We must be mindful of this – drawing on the ways that discourse analysis can shine a light on otherwise unseen elements of the data. We must not, as Cruickshank (2012) argues, allow the transcript to become the focus rather than the subject matter itself.

Dr Fiona Aubrey-Smith, Director, One Life Learning Strategic Education Consultancy and Associate Lecturer (E313 and EE831), The Open University

EdD Thesis “An exploration of the relationship between teachers’ pedagogical stance and the use of ICT in their classroom practice.” Publications and Presentations at www.onelifelearning.co.uk

f.s.aubrey-smith@open.ac.uk

Twitter: @FionaAS
