I'm using the Weka Experimenter to evaluate the performance of a number of learning algorithms on a single dataset using 10 times 10-fold cross-validation. When I extract the results from the experiment, I get identical values for the "Percent_correct" (i.e. accuracy) and the "Weighted_avg_true_positive_rate" (i.e. recall or sensitivity) metrics. Even the standard deviations are identical (see attached table).

Is this really correct? Accuracy and recall are calculated in totally different ways, so I have a hard time understanding how they can be identical for all 16 algorithms that I evaluate. I have verified these results on different versions of Weka (3.6.14 and 3.7.7) and also directly with my own Java program using the Weka API.

The dataset's class attribute has 4 ordinal values (see the attached confusion matrix figure for more info).
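For reference, this is a minimal sketch of how I understand the two metrics to be computed from a confusion matrix. The 4x4 matrix below is made up purely for illustration, not my actual data:

```java
// Sketch: computing accuracy and weighted-average recall by hand from a
// confusion matrix (rows = actual class, columns = predicted class).
// The matrix is a made-up 4-class example, not the actual experiment data.
public class MetricCheck {

    // Accuracy: correctly classified instances divided by all instances.
    static double accuracy(int[][] cm) {
        int correct = 0, total = 0;
        for (int i = 0; i < cm.length; i++) {
            for (int j = 0; j < cm[i].length; j++) {
                total += cm[i][j];
                if (i == j) correct += cm[i][j];
            }
        }
        return (double) correct / total;
    }

    // Weighted-average recall: per-class recall (TP / class support),
    // weighted by each class's share of the total instances.
    static double weightedAvgRecall(int[][] cm) {
        int total = 0;
        for (int[] row : cm) for (int v : row) total += v;
        double sum = 0.0;
        for (int i = 0; i < cm.length; i++) {
            int support = 0;
            for (int v : cm[i]) support += v;
            double recall = support == 0 ? 0.0 : (double) cm[i][i] / support;
            sum += ((double) support / total) * recall;
        }
        return sum;
    }

    public static void main(String[] args) {
        int[][] cm = {
            {50,  3,  2,  0},
            { 4, 40,  5,  1},
            { 1,  6, 35,  8},
            { 0,  2,  7, 30}
        };
        System.out.println("Accuracy:            " + accuracy(cm));
        System.out.println("Weighted avg recall: " + weightedAvgRecall(cm));
    }
}
```

This is essentially what my own verification program does as well, and both methods give the same number for this example too.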

I hope someone can shed some light on this. Thanks in advance.

/M

Attachments: Skärmavbild 2016-06-07 kl. 22.12.18.jpg, Skärmavbild 2016-06-07 kl. 22.11.58.jpg