Discussion: The quantitative approach to ICR evaluation is a viable tool for ensuring the quality of qualitative content analysis. Kappa values and a careful review of agreement rates contribute to the assessment and improvement of coding quality. The approach promotes good coding practices and enhances the credibility of analyses, particularly when large samples are analyzed, multiple coders are involved, and quantitative results are reported.
Results: First, a coding scheme was developed using a combined inductive and deductive approach.
Second, 10 transcripts were coded independently by two researchers, and ICR was calculated. The resulting kappa value of 0.67 can be considered satisfactory. In addition, differing agreement rates helped identify problems in the coding scheme: low agreement rates for a code suggest, for example, that the code is too broad and needs to be clarified. In a third step, the results of the ICR analysis were used to refine the coding scheme, leading to consistent, high-quality coding.
Background: High intercoder reliability (ICR) is required in qualitative content analysis to ensure quality when multiple coders participate in data analysis. However, the literature provides no standardized procedures for assessing ICR in qualitative content analysis.
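The two quantities discussed above, the percent agreement rate and Cohen's kappa, can be computed directly from two coders' code assignments. A minimal sketch follows; the function names and the example codes ("A", "B") are illustrative and not taken from the study:

```python
from collections import Counter

def percent_agreement(coder1, coder2):
    """Share of units to which both coders assigned the same code."""
    assert len(coder1) == len(coder2), "coders must rate the same units"
    return sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)

def cohens_kappa(coder1, coder2):
    """Cohen's kappa: observed agreement corrected for chance agreement.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each coder's code frequencies.
    """
    n = len(coder1)
    p_o = percent_agreement(coder1, coder2)
    freq1, freq2 = Counter(coder1), Counter(coder2)
    p_e = sum((freq1[c] / n) * (freq2[c] / n) for c in set(coder1) | set(coder2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical assignments for four coded text units:
c1 = ["A", "A", "B", "B"]
c2 = ["A", "B", "A", "B"]
print(percent_agreement(c1, c2))  # 0.5
print(cohens_kappa(c1, c2))       # 0.0 (agreement no better than chance)
```

The example illustrates why kappa is reported alongside raw agreement: the two coders agree on half the units, yet kappa is zero because that much agreement is expected by chance given how often each coder uses each code.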