A part of my thesis work is to investigate the reliability of the Force Concept Inventory through interviews with students. I have spent quite some time transcribing, and my next step is to analyse the transcripts. For this, I have chosen to employ a recently proposed network approach to qualitative discourse analysis, as described in Bruun et al. (2018).
The method intrigues me because it could relate well to my network approach to analysing the Force Concept Inventory. The author of both methods is, of course, also my supervisor for this project, so a natural introduction to both approaches became part of our discussions.
If the method yields meaningful interpretations of the interviews, it would be interesting to see whether the resulting networks resemble those emerging from the FCI analysis. Simply put: will the modules from each interview resemble each student's interpreted response pattern in the test?
The method is described explicitly in the document R-code for network analysis and qualitative discourse analysis of a classroom group-discussion, although I have taken a slightly different approach. I could not get their code to work for me, and I am more fond of Python for language-processing tasks, so I wrote a Python script to create the edge lists and a separate R script to create the network graphs. The edge lists are saved as DataFrames in the .feather format, which can easily be read into R: you simply import the feather package in both Python and R and can then write and read .feather files from either side. This all worked very well, and I will describe my method more thoroughly once it is more fully developed. Preliminary results may become more promising as the text mining is refined; as of now the graphs do not reveal any particular themes and only seem to trace the conversation content:
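To give a rough idea of what the Python side of the pipeline does, here is a minimal sketch of building a weighted edge list from a coded transcript. The pairing rule (linking each utterance code to the next one in the conversation) and the example codes are illustrative assumptions on my part, not the exact scheme from Bruun et al. (2018) or my actual script:

```python
from collections import Counter

def edge_list(coded_utterances):
    """Build a weighted edge list by linking consecutive codes.

    `coded_utterances` is a list of codes (e.g. themes assigned to
    each speaking turn). Each consecutive pair of codes becomes a
    directed edge; repeated pairs accumulate weight. This adjacency
    rule is an illustrative assumption, not the published method.
    """
    weights = Counter(zip(coded_utterances, coded_utterances[1:]))
    # Rows of (source, target, weight), sorted for reproducibility.
    return [(a, b, w) for (a, b), w in sorted(weights.items())]

# Hypothetical codes for a short stretch of interview:
codes = ["force", "motion", "force", "motion", "friction"]
edges = edge_list(codes)
print(edges)
# → [('force', 'motion', 2), ('motion', 'force', 1), ('motion', 'friction', 1)]
```

These rows can then be wrapped in a pandas DataFrame and written with `DataFrame.to_feather()`, after which the R script loads the file (e.g. via the feather package's `read_feather()`) and draws the network graph.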