Using ChatGPT to read survey comments
June 28, 2023
The first time I took some survey comments and asked ChatGPT to summarise them – I got an error.
A few errors later I got my first proper result, and my first reaction was "oh – this is good." A little tweak and then "Wow!" I had a neat little summary of my 500 responses.
But hang on. It looked right but it wasn’t complete.
I knew this data well, and what I was seeing felt like a cursory scan where the AI had found the most common points (a bit like blagging an answer). I asked more questions to help confirm or refine the results, but all I ended up with were basic summaries.
This isn't fair on ChatGPT. I am clearly at beginner level and certainly making mistakes as I figure out how ChatGPT works. My first observation is that it's not plug and play. While AI can automate many things, there is still a crucial learning curve – not just the technical setup, but how to ask the questions and how to read the AI's responses. So much depends on the questions you ask.
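To make "it depends on the questions you ask" concrete, here is a minimal sketch of one way to build a summarisation prompt from a batch of survey comments. The comment text, survey question, and prompt wording are all illustrative assumptions, not from my actual tests – but numbering the comments and restating the survey question is the kind of small tweak that changed my results.

```python
def build_prompt(comments, question):
    """Combine survey comments into a single prompt for an AI assistant.

    Numbering the comments and restating the survey question gives the
    model context that short, ambiguous answers otherwise lack.
    """
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(comments, start=1))
    return (
        f'The following are responses to the survey question: "{question}"\n\n'
        f"{numbered}\n\n"
        "Summarise the main themes. List every distinct theme, even if only "
        "one respondent mentions it, and note roughly how many comments "
        "touch on each theme."
    )

# Hypothetical example data
comments = [
    "Checkout was confusing",
    "Great service, fast delivery",
    "Couldn't find the returns page",
]
print(build_prompt(comments, "How was your shopping experience?"))
```

Asking for "every distinct theme, even if only one respondent mentions it" is an attempt to push back against the cursory-scan behaviour described above; whether it works will vary by model and by data.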
I was also surprised that simply re-asking the question generated different results. These differences might be significant.
I've seen a few reports of ChatGPT making things up, but I didn't see any examples in my tests. For me it wasn't a case of inventing themes that don't exist; it was more a question of which themes were missing from the summary, and how accurate the results were. I had the same challenge with a second set of survey comments in another test.
My second observation is about our data. Amongst the survey comments we tend to collect are wonderful gems that paint a very clear picture, but many more are shorter answers, 4 or 5 words and incomplete sentences. Most make sense in the context of the question they answer, but some are ambiguous and open to interpretation. What does the AI make of those comments?
Regardless, you would almost certainly want to scan through comments for each theme to learn what/how they’re being discussed.
My last observation is the AI isn’t interpreting the meaning of the feedback. My initial surprise at how quickly I had a fairly good summary, albeit incomplete, led me to expect more (unfairly). My testing was just prompts churning out qualitative answers. It is still left to you to react to that output to ‘dig deeper’ or draw conclusions.
Conclusions
My initial ‘wow’ has fallen to a ‘this might be interesting’. If you have large(ish) sets of comments data, AI will reduce the amount of time and effort to create a summary of what’s being said (potentially this could be huge). It doesn’t replace reading the comments – getting a feel for the way things are said and their tone or emotion. You definitely need to factor in reviewing the output for accuracy.
I am wary of how quickly this post will date, because ChatGPT and other AI tools are rapidly getting better and faster. It could be quite exciting…
Further reading
This post came out of Survey data: Do you read all the comments? – where the first thought was: can't ChatGPT do it? Have you used AI to help with your survey? What did you think? Make contact here