There is significant buzz on Twitter, in the media, and in corridor conversations about AI (artificial intelligence) and the impact new platforms like ChatGPT are having, and will have, on academic writing, knowledge-making and research. There's a good deal of scare-mongering out there about Large Language Models (LLMs) like ChatGPT, which seem to 'know' a lot about the world and can therefore generate pieces of writing that may be hard to distinguish from writing a human actually produced. Stories abound online about people using this tool to write children's books, to assist with legal judgments, to write letters to their local councils, and to write academic texts. But there is also a good deal of excitement about what tools like ChatGPT can offer researchers, writers, and ordinary people. I'm going to focus on this latter part in this post, and grapple with using AI tools for research in more transparent and ethical ways (which is new territory for many of us).
I am working on a couple of short book chapters at the moment, as well as an abstract for a special issue, and I have been battling to write sharp, clear titles. Titles are always a challenge for me. The ones I write tend to be too long, too wordy, too convoluted. I don't like most of the titles I write, and it takes me ages to get to a final version I can live with. But after watching this short video by Lynette Pretorius on asking ChatGPT for help with writing a research question, I decided to see if it could help me refine my clunky titles. I created an account, logged in and asked for help:

I answered the cheerful response by pasting in the abstract I have written for this chapter.

And, ChatGPT responded with this:

I am now able to consider these suggestions and refine or add to them myself. They're starting points for creative thinking and writing. I am thinking of a mix of 2 and 5, actually, as we are trying to talk about amplifying students' voices, and we want to use the idea of feedback conversations. The original title I wrote included 'doctorateness', but that is, in this context, jargon, and it's not usually a good idea to use jargon in an article or chapter title: it can put readers off if they don't know what you are writing about. So, maybe something like: Amplifying candidates' voices: Using feedback conversations to develop researcher identity and doctoral writing.
I have asked ChatGPT to help me with the other book chapter title and with a title for a paper I am proposing for a journal special issue. In both instances, the five suggestions sparked further thinking, and I have been able to revise my titles by building on and refining the suggestions I received. It also helped me find a name for my new podcast. What is important to note here is that ChatGPT did not do the work for me: it cannot generate suggestions out of nothing, and the less you give it, the less helpful the suggestions will be. I saw this more as a conversation with a generative AI tool that has been trained on billions of words from all over the publicly available internet, including all of Wikipedia and possibly my own open access papers and blogposts. I gave it the information I had been able to generate from my own reading and thinking, and it pushed me a little further by offering suggestions to spark more thinking and writing.
In the case of my podcast name, I didn't love the first round of suggestions, so I asked it to think again. By clicking on 'regenerate response' it created a further 10 suggestions for me to play with and think about. I realised that the information I gave it was a bit thin on detail, so I responded to its request for information about the podcast with more detail, and it created better suggestions for me to consider. It was fun to have this conversation with the AI because it responded so enthusiastically, which is always lovely, and because it pushed me to really think hard about what I am trying to do: the identity of my podcast and my audience, what I want to say and add to the conversation. I hadn't put all of that down into a coherent paragraph before, and it was harder than I thought it would be. The suggestions helped me to work out what I do and do not want to say or be in this podcast. For example, am I an 'activist' academic here? I don't really think so (Suggestion 3: The Academic Activist: Fighting for Equity and Justice). Am I really going to be breaking barriers, or is that too ambitious? (Suggestion 6: Breaking Barriers in Academia: Perspectives from Scholars Across the Globe). Again, the AI helped me with my own thinking rather than stepping in and replacing me as an author and thinker.
I know, because I have experimented, that it can write text based on very short prompts. I tried 'What is radical feminist theory all about?' and got this response.

This needs some work as an academic text I could build into an essay or part of a thesis chapter: I would need proper academic references to support the claims being made, and I would need to link these ideas to my own argument or writing purpose. Otherwise, this is just information, rather than knowledge constructed as part of a core claim, argument or assignment focus. Academic writing is not about compiling information; it's about constructing a response to a question. In a doctoral or Master's thesis, the questions are those you create and pose as a researcher (in the form of a question or hypothesis). And your whole thesis – every sentence, paragraph, section and chapter – is built by you through reading, thinking, getting feedback, revising, generating data, analysing that data through a process of theorisation and interpretation, and pulling all of that together to answer your question(s) and make a contribution to knowledge. ChatGPT can help you take a clunky paragraph and revise it, or a convoluted research question and refine it. But it can't construct a thesis – or even a chapter of a thesis – in which the claims you make and the evidence you cite are arranged in response to an argument you are making, and are selected and connected in relation to your research aims and objectives. Only a thinking human writer can do that.
The example above is a bit like Wikipedia, right? What is this thing I am trying to write about, in simple terms? Often, we get lost in trying to sound academic, tangled up in big words, big meanings and long sentences, and we forget that what we are trying to do, at the core of our work, is communicate with readers. We do that best when our meanings are clear, simply stated (not simplistic), accessible. A tool like ChatGPT can help you do that. Not do your writing for you, but help you look at your writing with fresh eyes, maybe, and help you refine and play with it in ways that push you forward a step. If you use ChatGPT or a similar LLM to do this, you need to make that clear. The conventions on citing ChatGPT as a contributor to some of your thinking or writing are far from settled. But in my chapter, I am planning to include an endnote indicating that the title was written with the assistance of ChatGPT, which helped the authors to refine and polish earlier versions. I don't know what will happen, but I think it would be unethical not to acknowledge the help I have received.
I am reserving comment on the extent to which AI tools like this pose a danger to the world of academic or scholarly meaning and knowledge-making. It's early days, and we're not yet fully sure what these tools can do or what the limits of their capacity for learning, generation, and mimicry are. But I will say that we have used AI in academia for ages – think Grammarly, Research Rabbit, Semantic Scholar, Elicit, Turnitin. Although these tools are not generative in the ways ChatGPT and other LLMs are, AI is not new, and nor is our reliance on it. I think we need to learn as much as we can about these tools, try them, work with them, and understand what they can and cannot do. And then talk about them – with our co-researchers, with our students, with academic developers in our universities – so that we can collectively respond in appropriate ways, as opposed to issuing blanket bans on tools that, as I am finding, can be helpful when used transparently and carefully. I would really love to hear your thoughts on this if you have tried this or other AI tools and found them helpful and/or alarming.
*Featured image by Andrei Hasperovich (Adobe Stock)