Table 2 Summary table of inter-observer agreement statistics for the training (Uganda) and validation (Kenya) data sets

From: Analysing 3429 digital supervisory interactions between Community Health Workers in Uganda and Kenya: the development, testing and validation of an open access predictive machine learning web app


Human coder agreement [observed % agreement; Cohen’s Kappa value (SE; CI)]

  Training data set (Uganda):   88%; 0.70 (SE 0.04; CI 0.63–0.78)
  Validation data set (Kenya):  95%; 0.91 (SE 0.091; CI 0.895–0.937)

CHWsupervisor web app prediction accuracy [observed % agreement; Cohen’s Kappa value (SE; CI)]

  Training data set (Uganda):   78%; 0.56 (SE 0.04; CI 0.49–0.64)
  Validation data set (Kenya):  73%; 0.51 (SE 0.021; CI 0.473–0.556)

  1. A summary of observed percentage agreements and Cohen’s Kappa values for inter-human-coder agreements and CHWsupervisor web app prediction accuracy across the training and validation test sets.
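As a brief illustration of the statistic reported above: Cohen's Kappa adjusts the observed percentage agreement for the agreement two raters would reach by chance, given each rater's marginal label frequencies. The sketch below computes Kappa from a confusion matrix of paired codings; the counts are invented for illustration and are not the study's data.

```python
# Minimal sketch: Cohen's kappa from a square confusion matrix
# whose entry [i][j] counts items coded as category i by rater A
# and category j by rater B. Illustrative only.

def cohens_kappa(confusion):
    total = sum(sum(row) for row in confusion)
    n = len(confusion)
    # Observed agreement: proportion of items on the diagonal.
    p_o = sum(confusion[i][i] for i in range(n)) / total
    # Chance agreement: product of the raters' marginal proportions.
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(confusion[i][j] for i in range(n)) for j in range(n)]
    p_e = sum(row_tot[k] * col_tot[k] for k in range(n)) / total ** 2
    return (p_o - p_e) / (1 - p_e)

# Example: 90% raw agreement with balanced marginals gives kappa = 0.8,
# showing why kappa is lower than the raw percentage agreement.
kappa = cohens_kappa([[45, 5], [5, 45]])
```

This is why each cell in the table pairs a raw percentage with a (lower) Kappa value: the percentage alone overstates agreement when chance agreement is high.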