
Table 1 The classification metrics for each balanced random forest algorithm. Accuracy is defined as the proportion of an observed class that was correctly classified. Precision is defined as the proportion of a predicted class that was correctly classified. Kappa can be interpreted as the percent improvement in overall accuracy of a classifier compared with the expected overall accuracy of a random classifier

From: Evaluation of different radon guideline values based on characterization of ecological risk and visualization of lung cancer mortality trends in British Columbia, Canada

| Model | Threshold (Bq m⁻³) | Lower-than-threshold accuracy | Lower-than-threshold precision | Higher-than-threshold accuracy | Higher-than-threshold precision | Kappa | Kappa gain |
|---|---|---|---|---|---|---|---|
| a) | 600 | 0.81 | **0.97** | 0.69 | 0.22 | 0.25 | 0 |
| b) | 500 | **0.83** | 0.96 | 0.72 | 0.32 | 0.36 | **0.11** |
| c) | 400 | **0.83** | 0.96 | 0.74 | 0.37 | 0.39 | 0.03 |
| d) | 300 | 0.80 | 0.94 | 0.73 | 0.42 | 0.41 | 0.02 |
| e) | 200 | 0.80 | 0.91 | 0.76 | 0.55 | 0.49 | 0.08 |
| f) | 150 | 0.77 | 0.88 | 0.76 | 0.60 | 0.50 | 0.01 |
| g) | 100 | 0.79 | 0.86 | 0.83 | 0.75 | 0.61 | **0.11** |
| h) | 50 | 0.77 | 0.80 | **0.86** | **0.84** | **0.63** | 0.02 |

1. Values in bold indicate the highest value between threshold models
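The caption's definitions can be made concrete with a short sketch: per-class "accuracy" (the proportion of an observed class correctly classified, i.e. recall), per-class "precision" (the proportion of a predicted class correctly classified), and Cohen's kappa. This is an illustrative implementation, not the authors' code; the `classification_metrics` function and the `"low"`/`"high"` labels are assumptions for the example.

```python
from collections import Counter

def classification_metrics(observed, predicted, classes=("low", "high")):
    """Per-class accuracy (recall), per-class precision, and Cohen's kappa,
    following the definitions given in the table caption."""
    n = len(observed)
    pairs = Counter(zip(observed, predicted))  # (observed, predicted) counts
    metrics = {}
    for c in classes:
        observed_c = sum(v for (o, p), v in pairs.items() if o == c)
        predicted_c = sum(v for (o, p), v in pairs.items() if p == c)
        correct_c = pairs.get((c, c), 0)
        # "Accuracy": share of the observed class that was correctly classified
        metrics[f"{c}_accuracy"] = correct_c / observed_c if observed_c else 0.0
        # "Precision": share of the predicted class that was correctly classified
        metrics[f"{c}_precision"] = correct_c / predicted_c if predicted_c else 0.0
    # Cohen's kappa: observed agreement relative to chance agreement
    p_o = sum(pairs.get((c, c), 0) for c in classes) / n
    p_e = sum(
        (sum(v for (o, p), v in pairs.items() if o == c) / n)
        * (sum(v for (o, p), v in pairs.items() if p == c) / n)
        for c in classes
    )
    metrics["kappa"] = (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0
    return metrics
```

For example, with observed labels `["low","low","low","high","high"]` and predictions `["low","low","high","high","low"]`, the lower-than-threshold accuracy is 2/3 and kappa is (0.6 − 0.52)/0.48 ≈ 0.17.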