Why is theory such a big deal in postgraduate research?

I am working with a new student. Long story short, I am not his first supervisor, and this is not his first attempt at his PG research project. He’s had a tough time thus far, particularly with theory, as his first supervisor did not seem to feel he needed any. Quite understandably, then, one of his first questions to me was ‘why are we making such a big deal about theory [when my research is narrative]?’ In answering this question, I have been pondering a bit more about why theory is such a big deal in research, especially at PG level.

The best way to begin is with an overview of what postgraduate research (any academic research perhaps) is for: to make a novel, valuable and needed contribution to knowledge in your field of study and/or practice. Often, particularly in the social sciences, we are taking a known problem and trying to solve it with a new approach, or we are critiquing the work of others from a particular perspective to extend knowledge further, or we are introducing a new problem, solvable with established approaches in ways that extend or consolidate knowledge and practice. To achieve this contribution to knowledge, we focus on a small slice of the known world – our data – and we analyse this in ways that connect our findings to broader understandings/knowledge/phenomena so that what we are contributing clearly fits within the bigger picture in our field.

If this, then, is basically why we do research, then how do we actually achieve this goal of saying something new and fitting the new into the established knowledge in our field? This is, in many instances, where theory really does its best work.


When we do academic research, any research, we are trying to find an answer to a question that needs one. We start with a research problem, and we read around that, becoming increasingly focused until we have read enough to locate a gap in the field that we can contribute to filling with our research. We then narrow down a research question, the answers to which will fill (part of) this gap. At this point, we have a sense of what data we are going to generate and how (research design and method) and we may even (from reading) have a basic sense of what we may find. But, what we need is a framework within which to understand what we may find, and tools to use to make meaning from this data. We need to ensure that we move beyond purely descriptive meanings, even in descriptive studies. If all we are doing is describing or narrating our small slice of the world, it may be interesting, but perhaps only to a tiny group of potential readers who understand the specifics well enough to extract meanings of their own. This falls short of the kinds of contribution to knowledge expected of postgraduate scholars and publishing academics.

The potentially frustrating and difficult issue of finding the right framework for your research is that you can’t really ‘find’ one and just put it into your project, where it will do its own thing. Doing this would be akin to writing a ‘theory’ chapter or section, and then doing nothing with that theory in the analysis to connect your study to the field. Rather, you have to build and use your theoretical framework to make sense of your study, and its contribution to the field. This means you need to find theory that fits with your research problem and questions, that can help you understand this problem in productive ways. Then, you need to select the relevant parts of the whole theory (you don’t necessarily, for example, need to include everything Pierre Bourdieu ever wrote in your thesis if all you really need to focus on is the interplay between capital and habitus in the structuring of a field). This selected theory then needs to be explained, exemplified in relation to your study, and connected into a coherent structure, or framework.


Once you have what Bernstein called the ‘internal language of description’ for your study – your study’s own account of the theory it will be using and why this theory is the most appropriate choice for this study – you can generate data, or analyse the data you have generated. This is where theory becomes the big deal that it is. Theory is transformed when it is brought into contact with data. It stops being quite so abstract, and becomes more alive and real. It actually helps you to say something about why you see what you do in your data, and what the things you see actually could mean, connected to the larger picture. It helps you create an ‘external’ language of description – a translation device, as Maton puts it – which transforms theory in the abstract into an analytical language that can describe and make meaning of data. Other researchers can draw on, adapt, and add to this in their own studies, further amplifying the value of your research.

For example, several students have told you that no one will assist them with supervisor issues. Rather than saying that this is just an unsupportive environment, you can use theory that gives you insight into power and university cultures around autonomy. With this insight, you could postulate that the environment is structured so as to give administrators and supervisors far more power than students, and with that power they can maintain an unsupportive status quo. Perhaps this unsupportive environment is created and maintained with the (misguided) notion that students need to be autonomous and independent, but you can now critique this with your data and theory to show why this doesn’t actually work. And you could back up this postulation with reference to other studies that have made similar or related arguments.

Instead of just a small story about your data, and why you think it is interesting, you now have a potentially powerful analysis of the data that says what it means, why this meaning is important to pay attention to, and how this meaning connects with other meanings, thus making a contribution to research in your field.


Theory isn’t just an odd requirement that has to be met in postgraduate research. It also is not some sort of relic of an ‘elitist’ version of higher education (one criticism I have heard a few times now). It’s a tool: it helps us really say something important and valuable about the world around us. We need to be doing research that connects us to other people, other research, other meanings, so that all of these meanings and arguments can build on one another cumulatively, amplifying our findings and voices. If what we want is better understanding of problems, new solutions to old problems and powerful change, then we need to harness the power theory offers us as researchers and use it to help us achieve these goals.

Putting your theory to work in analysis

You have now generated data – in some form, whether primary or secondary – and you need to code and make sense of it; you need to put it to the task of answering your research question(s). In other words: analysis. This was the toughest part of my own PhD: I had a mountain of data – how to choose the right pieces? What to say about them? How to make sense of them in relation to my research questions?

This is where theory and concepts come into their own in a PhD or MA. You will have some form of theoretical or conceptual framework (for clarity on theory and concepts, how they differ and work together, please watch this short video). Where students often go off track, though, is not using these concepts or theory to do the work in analysis. The theoretical or conceptual framework ends up standing alone, and some form of thematic description of the data is made, with a rather thin version of analysis. In this situation, it may be difficult to offer a credible answer to your research question.

Analysis is, in essence, an act of sense-making. It requires you to move beyond a common sense, everyday understanding of the world, and your data – the level of the descriptive – to a theorised, non-common sense understanding – the level of the analytical (and critical). Analysis means connecting the specific (your study and its data) with the general (a phenomenon, theory, concept, way of looking at the world) that can help to explain how the specific fits in with, or challenges, or exemplifies the general. If you do not make this move, all you may end up with is a set of data that describe a tiny piece of the world, but with little or no relevance to anyone else’s research except perhaps the few other people researching the same thing you are.


So, how might you ‘do’ analysis?

Imagine you are doing a study on the role of reflective learning in building students’ capacity to critique and create professional knowledge that encourages ongoing learning and problem-solving. ‘Reflection’, or ‘reflective practice’ would be a key concept, as would ‘professional knowledge’, ‘problem-solving’, and ‘learning’. These have generalised, or conceptual meanings that could apply in a range of ways, depending on the parameters and questions of a specific study. Thus, they can do analytical work, helping you to theorise as you answer your research questions.

Then imagine your data set is assessment tasks completed by students in social work and accounting, as two professional disciplines which require adaptive, ongoing learning and problem-solving. You now need a way of employing your key concepts in analysis. You could look at the intentions of the task questions – how they do, or do not, explicitly or actively enable or encourage problem-solving and reflective thinking and learning, and then look at students’ responses and see the extent to which the desired forms of learning are visible or not. This would yield useful findings to feed back to these disciplines in using assessment more effectively.

To reach theorised findings that go beyond describing what the tasks and the student writing said, and conjecture about what the tasks and written responses mean in relation to your study’s understanding of professional knowledge, learning, problem-solving and reflection, you need to start with questions.


For example: these tasks seem to be using direction words such as ‘name’, ‘list’, ‘describe’, ‘mention’, which require mainly memorising, or learning the notes in a rote manner. What kind of learning would this encourage? What impact would this have on students’ ability to move on to more analytical tasks? Is there a progression from ‘memorisation’ towards ‘problem-solving’ or using knowledge to reflect on and learn from case studies etc? What kind of progression is there? Is it sensible, or not, and how could this affect students’ learning? And so on.

You could then present the data: e.g., this is the task, and this is when students work on this task in the semester or progression of the course, and this is the task that follows (show us what these look like by copying them out, or including photographs). This part of the analysis is quite descriptive. But then you pose and answer relevant questions guided by your overall research objectives: if these two disciplines – social work and accounting – require professional learning and knowledge that is built through reflection, and the capacity to USE rather than just KNOW the knowledge in the field so that professionals can adapt, continue learning, and solve complex problems, what kinds of assessment tasks are needed in higher education? Do the tasks students are completing in the courses I am studying here do this kind of work? If yes, how are they working to build the right kinds of knowledge, skills and aptitudes? If no, what might be the outcome for these students when they graduate and move into the professions? You then have to use the concepts you have pulled together to create a theorised understanding of professional reflective learning to pose credible answers that are substantiated with your data (as evidence). This is the act of analysis.


In both qualitative and quantitative studies, the theory or concepts you choose, and the data you generate, are informed by your research aims and objectives. And in both kinds of studies, analysis requires moving beyond description to say something useful about what your data means in relation to the general phenomenon you are connecting with, and that informs your theorisation (student learning, climate change, democratic governance, etc). Thus, you need to work – iteratively and in incremental stages – to bring your theory to your data, to make sense of the data in relation to the theory so that your study can make a contribution that speaks both to those within your research space, and those beyond it who can draw useful conclusions and lessons even if their data come from somewhere else.


Iterativity in data analysis: part 2

This post follows on from last week’s post on the iterative process of doing qualitative data analysis. Last week I wrote a more general musing on the challenges inherent in doing qualitative analysis; this week’s post is focused more on the ‘tools’ or processes I used to think and work my way through my iterative process. I drew quite a lot on Rainbow Chen’s own PhD tools as well as others, and adapted these to suit my research aims and my study (reference at the end).

The first tool was a kind of ’emergent’ or ‘ground up’ form of organisation, and it really helps you to get to know your data quite well. It’s really just a form of thematic organisation – before you begin to analyse anything, you have to sort, organise and ‘manage’ your mountain of data so that you can see the wood for the trees, as it were. I didn’t want to be overly prescriptive. I knew what I was looking for, broadly, as I had generated specific kinds of data and my methodology and theory were very clearly aligned. But I didn’t really know what exactly all my data was trying to tell me, and I really wanted it to tell its story rather than me telling it what it was supposed to be saying. I wanted, in other words, for my data to surprise me as well as to show me what I had broadly hoped to find in terms of my methodology and my theoretical framework.

So, the ‘tool’ I used organised the data ‘organically’, I suppose – creating very descriptive categories for what I was seeing and not trying to overthink this too much. As I read through my field notes, interview transcripts, video transcripts, documents, I created categories like ‘focusing on correct terminology’ and ‘teacher direction of classroom space’ and ‘focus on specific skills’. The theory is always informing the researcher’s gaze, as Chen notes in her paper (written with Karl Maton), but to rush too soon to theory can be a mistake and can narrow your findings. So my theory was here, underpinning my reading of the data, but I did not want to rush to organise my data into theoretical and analytical ‘codes’ just yet. There was a fair bit of repetition as I did this over a couple of weeks, reading through all my data at least twice for each of my two case studies. I put the same chunks of text into different categories (a big plus of using data software) and I made time to scribble in my research journal at the end of each day during this process, noting emerging patterns or interesting insights that I wanted to come back to in more depth in the analysis.
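In code-like terms, this first-pass coding is just a many-to-many mapping from chunks of data to descriptive categories. The sketch below is purely illustrative – a minimal stand-in for what qualitative data software does at this stage, not the software I used; the category names come from the post, but the data chunks themselves are invented examples:

```python
from collections import defaultdict

# A descriptive category maps to the list of data chunks filed under it.
codes = defaultdict(list)

def code_chunk(chunk, categories):
    """File one chunk of data under one or more descriptive categories."""
    for category in categories:
        codes[category].append(chunk)

# The same chunk can sit in more than one category, as described above.
code_chunk("Teacher: 'we call this a ledger, not a list'",
           ["focusing on correct terminology", "focus on specific skills"])
code_chunk("Teacher moves the group to the front bench for the demo",
           ["teacher direction of classroom space"])

print(sorted(codes))
```

The point of staying this ‘flat’ and descriptive is that no theoretical label constrains what a category can be yet – the theory comes in at the next stage.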

An example of my first tool in action

The second process was what a quantitative researcher might call ‘cleaning’ the data. There was, as I have noted, repetition in my emergent categories. I needed to sort that out and also begin to move closer to my theory by doing what I called ‘super-coding’ – beginning to code my data more clearly in terms of my analytical tools. There were two stages here: the first was to go carefully through all my categories and merge very similar ones, delete unnecessary categories left over after the merging, and make sure that there were no unnecessary or confusing repetitions. I felt like the data was indeed ‘cleaner’ after this first stage. The second stage was to then super-code by creating six overarching categories, named after the analytical tools I developed from the theory. For example, using LCT gave me ‘Knowers’, ‘Knowledge’, ‘Gravity’ and ‘Density’. I was still not that close to the theory here, so I used looser terms than the theory asks researchers to use (for example we always write ‘semantic gravity’ rather than just ‘gravity’). I then organised my ‘emergent’ categories under these headings, ending up with two levels of coded data, and coming a step closer to analysis using the theoretical and analytical tools I had developed to guide the study.
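The two stages above can also be sketched as operations on that flat category-to-chunks mapping: merge near-duplicate categories, then nest the cleaned categories under theory-derived headings to get two levels of coded data. Again this is only an illustrative sketch – the emergent category names and their groupings here are invented; only the super-code labels (‘Knowers’, ‘Knowledge’, ‘Gravity’, ‘Density’) come from the post:

```python
# Emergent (descriptive) categories from the first pass; chunks invented.
emergent = {
    "focusing on correct terminology": ["chunk 1", "chunk 2"],
    "focus on correct terms": ["chunk 3"],  # near-duplicate category
    "teacher direction of classroom space": ["chunk 4"],
}

# Stage 1: 'clean' the data by merging very similar categories
# (copy the lists so the original coding is left untouched).
merged = {category: list(chunks) for category, chunks in emergent.items()}
merged["focusing on correct terminology"] += merged.pop("focus on correct terms")

# Stage 2: 'super-code' by nesting the cleaned categories under
# overarching, theory-derived headings -> two levels of coded data.
super_codes = {
    "Knowers": [],
    "Knowledge": ["focusing on correct terminology"],
    "Gravity": [],
    "Density": ["teacher direction of classroom space"],
}
coded = {heading: {category: merged[category] for category in categories}
         for heading, categories in super_codes.items()}
```

Some headings may stay empty until more data is coded; the useful property is that every chunk is now reachable both through its descriptive category and through the analytical tool it speaks to.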

By this stage, you really do know your data quite well, and clearer themes, patterns and even answers to your questions begin to bubble up and show themselves. However, it was too much of a leap for me to go from this coding process straight into writing the chapter; I needed a bridge. So I went back to my research journal for the third ‘tool’ and started drawing webs, maps, plans for parts of my chapters. I planned to write chunks, and then connect these together later into a more coherent whole. This felt easier than sitting myself down to write Chapter Four or Chapter Five all in one go. I could just write the bit about the classroom environment, or the bit about the specialisation code, and that felt a lot less overwhelming. I spent a couple of days thinking through these maps, drawing and redrawing them until I felt I could begin to write with a clearer sense of where I was trying to end up. I did then start writing, and working on the chapters, and found myself (to my surprise, actually) doing what looked and felt like – and was – analysis. It was exciting, and so interesting – after being in the salt mines of data generation, and enduring what was often quite a tedious process of sitting in classrooms and making endless notes and transcribing everything, to see in the pile of salt beautiful and relevant shapes, answers and insights emerging was very gratifying. I really enjoyed this part of the PhD journey – it made me feel like a real researcher, and not a pretender to the title.

One of my ‘maps’ for chapter writing

Another ‘map’ for writing

This part of the PhD is often where we can make a more noticeable contribution to the development, critique, and generation of new knowledge, of and in our fields of study. We can tell a different or new part of a story others are also busy telling, and join a scholarly conversation and community. It’s important to really connect your data and the analysis of it with the theoretical framework and the analytical tools that have emerged from that. If too disconnected, your dissertation can become a tale of two halves, and can risk not making a contribution to your field, but rather becoming an isolated and less relevant piece of research. One way to be more conscious of making these connections clear to yourself and your readers is to think carefully about and develop a series of connected steps in your data analysis process that bring you from your data towards your theory in an iterative and rich rather than linear and overly simplistic way. Following and trying to trust a conscious process is tough, but should take you forward towards your goal. Good luck!



Reference: Chen, T-S. and Maton, K. (2014) ‘LCT and Qualitative Research: Creating a language of description to study constructivist pedagogy’. Draft chapter (forthcoming).


Data: collecting, gathering or generating?

I’m thinking about data again – mostly because I am still in the process of collecting/gathering/generating it for my postdoctoral research. I had a conversation with a colleague at a conference I went to recently who talks about ‘generating’ his data – colleagues of mine in my PhD group use this term too – but the default term I use when I am not thinking about it is still ‘collecting’ data. I’m sure this is true for many PhD scholars and even established researchers. I don’t think this is a simple issue of synonyms. I think the term we use can also indicate a stance towards our research, and how we understand our ethical roles as researchers.

Collect (as other PhD bloggers and methods scholars have said) implies a kind of linear, value-free (or at least value-light) approach to data. The data is out there – you just need to go and find it and collect it up. Then you can analyse it and tell your readers what it all means. Collect doesn’t really capture adequately, for me, the ethical dilemmas that can arise, large and small, when you are working in the ‘field’. And one has to ask: is the data just there to be collected up? Does the data pre-exist the study we have framed, the questions we are asking, and the conceptual and analytical lenses we are peering through? I don’t think it does. Scientists in labs don’t just ‘collect’ pre-existing data – experiments often create data. In the social sciences I think the process looks quite different – we don’t have a lab and test tubes etc – but even if we are observing teaching or reading documents, we are not collecting – we are creating. Gathering seems like a less deterministic type of word than collecting, but it has, for me, the same implications. I used this word in my dissertation, and if I could go back I would change it now, having thought some more about all of this.

Generating seems like a better word to use. It implies ‘making’ and ‘creating’ the data – not out of nothing, though; it can carry within it the notions of agency of the researcher as well as the research participants, and notions of the kinds of values, gazes, lenses, and interests that the parties to the research bring to bear on the process. When we generate data we do so with a particular sense in mind of what we might want to find or see. We have a question we are asking and need to try and answer as fully as possible, and we have already (most of the time) developed a theoretical or conceptual gaze or framework through which we are looking at the data and the study as a whole. We bring particular interests to bear, too. If, as in my study, you are doing research in your own university, with people who are also your colleagues in other parts of your and their working life, there are very particular interests and concerns involved that impact not just on what data you decide to generate, but also how you look at it and write about it later on. You don’t want to offend these colleagues, or uncover issues that might make them look bad or make them uncomfortable. BUT, you also have a responsibility, ethically, to protect not just yourself but also the research you are doing. Uncomfortable data can also be very important data to talk about – it can push and stretch us in our learning and growth even as it discomforts us. But this is not an easy issue, and it has to be thought about carefully when we decide what to look at, how and why.

These kinds of considerations, as one example, definitely influence a researcher’s approach to generating, reading and analysing their data, and it can help to have a term for this part of the research process that captures at least some of the complexity of doing empirical work. For now, I am going to go with others on this and use ‘generating’. Collecting and gathering are too ‘thin’ and capture very little if any of the values, interests, gazes and so forth that researchers and research participants can bring to bear on a study. Making and creating – well, these are synonyms for generating, but at the moment my thinking is that they make it sound too much like we are pulling the data out of nothing, and this is not the case either. The data is not there to be gathered up, nor is it completely absent prior to us doing the research. In generating data, we look at different sources – people, documents, situations – but we bring to bear our own vested interests, values, aims, questions, frameworks and gazes in order to make of what we see something different and hopefully a bit new. We exercise our agency as researchers, not just alone, but in relation to our data as well. Being aware of this, and making this a conscious rather than mechanical or instrumental ‘collection’ process, can have a marked impact, for the better I think, on how ethically and responsibly we generate data, analyse it and write about it down the line.