
balanced accuracy

(BA)
(average of true positive and negative rates)

Balanced accuracy is a measure used for evaluating classifiers, i.e., procedures that assign items to classes. Given a set of items whose true classes are known by some trusted means, a classifier can be evaluated by having it attempt to classify each of those items. The fraction of items in a particular class that it correctly identifies is its sensitivity (true positive rate) for that class, and the fraction of items not in that class that it correctly identifies as not belonging is its specificity (true negative rate). Balanced accuracy is the average of these two measures: BA = (sensitivity + specificity) / 2.
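
The computation can be sketched in a few lines of code. The following Python function is a minimal illustration, not a reference implementation; the function name and the example labels are hypothetical, with 1 marking items in the class and 0 marking items outside it:

    def balanced_accuracy(y_true, y_pred):
        """Average of the true positive rate (sensitivity) and
        the true negative rate (specificity)."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        sensitivity = tp / (tp + fn)  # fraction of in-class items found
        specificity = tn / (tn + fp)  # fraction of out-of-class items found
        return (sensitivity + specificity) / 2

    # Example: 6 items in the class (4 found), 4 items outside it (3 found)
    y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
    y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
    print(balanced_accuracy(y_true, y_pred))  # (4/6 + 3/4) / 2 ~ 0.708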

The term machine learning (ML) is now commonly used for software that constructs classifiers, evaluates their performance on data with known classifications, and then iteratively modifies and re-evaluates them in order to develop effective ones. Sensitivity, specificity, and balanced accuracy are among the measures that can be used for the evaluation step.
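
ML libraries commonly provide this metric directly; for example, the scikit-learn library offers it as balanced_accuracy_score. A brief usage sketch, assuming scikit-learn is installed and reusing the hypothetical labels from above:

    from sklearn.metrics import balanced_accuracy_score

    y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
    y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
    print(balanced_accuracy_score(y_true, y_pred))  # ~ 0.708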


(measure, statistics, machine learning)
Further reading:
https://en.wikipedia.org/wiki/Sensitivity_and_specificity
https://www.statology.org/balanced-accuracy/
https://statisticaloddsandends.wordpress.com/2020/01/23/what-is-balanced-accuracy/
https://stephenallwright.com/balanced-accuracy/
https://neptune.ai/blog/balanced-accuracy
