Ideally, any comparison framework should be able to distinguish a Random classifier from the real ones by their accuracy results, even if the data does not quite satisfy the requirements pointed out in Demšar (2006) (N > 10 data sets, k > 5 classifiers).

The data produces the correct ranking:
| Model | Average rank (by accuracy) |
| --- | --- |
| Polygrid | 1.00 |
| Ridge | 3.25 |
| Linear | 4.00 |
| BRRF | 4.50 |
| DT | 5.00 |
| BRDT | 5.50 |
| MLP | 5.50 |
| RF | 7.25 |
| Random | 9.00 |
However, all classifiers are included in a single group:
Groups of models with statistically indistinguishable performance:
Group 1: ['BRDT', 'BRRF', 'DT', 'Linear', 'MLP', 'Polygrid', 'RF', 'Random', 'Ridge']
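For context, here is a minimal sketch of the procedure Demšar (2006) describes for this kind of comparison: a Friedman test over the per-data-set ranks followed by a Nemenyi post-hoc comparison. The accuracy matrix below is a hypothetical placeholder (the quarter-step average ranks above suggest something like N = 4 data sets), and this is not necessarily how the framework in question computes its groups.

```python
# Minimal sketch of the Friedman + Nemenyi procedure from Demšar (2006).
# The accuracy matrix is a hypothetical placeholder (4 data sets x 9 models),
# NOT the data behind the ranks reported in this issue.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

models = ["Polygrid", "Ridge", "Linear", "BRRF", "DT", "BRDT", "MLP", "RF", "Random"]

acc = np.array([  # rows = data sets, columns = models
    [0.95, 0.90, 0.88, 0.87, 0.86, 0.85, 0.85, 0.80, 0.11],
    [0.93, 0.89, 0.87, 0.86, 0.84, 0.83, 0.84, 0.78, 0.10],
    [0.96, 0.88, 0.86, 0.85, 0.85, 0.82, 0.83, 0.79, 0.12],
    [0.94, 0.91, 0.89, 0.86, 0.83, 0.84, 0.82, 0.77, 0.09],
])
N, k = acc.shape

# Friedman omnibus test: each argument is one model's accuracies across data sets.
stat, p = friedmanchisquare(*acc.T)
print(f"Friedman chi2 = {stat:.3f}, p = {p:.4f}")

# Average rank per model (rank 1 = best accuracy on a data set).
avg_ranks = rankdata(-acc, axis=1).mean(axis=0)
for name, rank in sorted(zip(models, avg_ranks), key=lambda t: t[1]):
    print(f"{name:<10} {rank:.2f}")

# Nemenyi critical difference: CD = q_alpha * sqrt(k * (k + 1) / (6 * N)).
# q_alpha is roughly 3.102 for k = 9 models at alpha = 0.05 (Demšar 2006, Table 5).
q_alpha = 3.102
cd = q_alpha * np.sqrt(k * (k + 1) / (6 * N))
print(f"Nemenyi critical difference = {cd:.2f} ranks")
# Two models separate only if their average ranks differ by more than CD;
# with N = 4 the CD is about 6 ranks, so only very large gaps
# (e.g. Polygrid at 1.00 vs. Random at 9.00) can exceed it.
```

This is only an illustration of Demšar's procedure; if the framework builds its groups with a different (more conservative) post-hoc test, the grouping behaviour may differ.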