Receiver operating characteristic curve
The receiver operating characteristic (ROC) curve is a graphical plot used frequently in radiology, particularly with regard to limits of detection and screening.
The curve demonstrates the inherent trade-off between sensitivity and specificity (a minimal computation of both axis values is sketched after this list):
- y-axis: sensitivity
- x-axis: 1 - specificity (false positive rate)
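As a rough illustration, both axis values can be computed at a single cut-off from a set of test scores and ground-truth labels; the data and threshold below are entirely hypothetical:

```python
# Sensitivity and false positive rate at one hypothetical cut-off.
labels = [1, 1, 1, 0, 0, 0, 0, 1]                  # 1 = diseased, 0 = healthy
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]  # test output per patient
threshold = 0.5                                    # "positive" if score >= threshold

tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)

sensitivity = tp / (tp + fn)          # y-axis value
false_positive_rate = fp / (fp + tn)  # x-axis value, i.e. 1 - specificity
print(sensitivity, false_positive_rate)  # -> 0.75 0.25
```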
A perfect test would be perfectly sensitive and have no false positives (100% specific); its curve would pass through the point at the very top left corner of the plot (Figure 1).
A worthless diagnostic test would be no better than chance: a straight diagonal line from the origin to the top right corner, along which sensitivity always equals the false positive rate.
One can generate an ROC curve between these two extremes by calculating the sensitivity and specificity at a range of cut-off values for the modality, disease, and patient population in question (Figure 2).
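A minimal sketch of that sweep, reusing the same hypothetical data: each distinct score is tried as a cut-off, yielding one (false positive rate, sensitivity) point on the curve.

```python
# Sweep every observed score as a cut-off to trace out the ROC curve.
def roc_points(scores, labels):
    """Return (fpr, tpr) points from the strictest cut-off to the loosest."""
    p = sum(labels)           # number of diseased patients
    n = len(labels) - p       # number of healthy patients
    points = [(0.0, 0.0)]     # strictest possible cut-off: nobody called positive
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / n, tp / p))
    return points

labels = [1, 1, 1, 0, 0, 0, 0, 1]                  # hypothetical data, as above
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]
print(roc_points(scores, labels))  # ends at (1.0, 1.0): everyone called positive
```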
The curve can also be used to find the point at which the trade-off between sensitivity and specificity is optimal (i.e. the apex of the curve).
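One common way to quantify that optimum (not specified in this article, so treat it as one option among several) is the Youden index, J = sensitivity + specificity - 1, which selects the ROC point lying furthest above the chance diagonal:

```python
# Cut-off maximizing the Youden index J = sensitivity + specificity - 1.
# On the ROC plot this is simply J = tpr - fpr, the vertical distance
# of each point above the chance diagonal.
def youden_optimum(points):
    return max(points, key=lambda p: p[1] - p[0])

# Toy (fpr, tpr) points, e.g. as produced by roc_points() above.
points = [(0.0, 0.0), (0.0, 0.5), (0.25, 0.9), (0.5, 1.0), (1.0, 1.0)]
print(youden_optimum(points))  # -> (0.25, 0.9), the "apex" of this toy curve
```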
Area under the curve (AUC)
Determining the area under the curve (AUC) allows one to compare different tests. The greater the area under the curve (up to the maximum of 1.0), the more accurate the test (i.e. better combined sensitivity and specificity).
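As a sketch, the AUC can be estimated from the curve's points with the trapezoidal rule (the points below are hypothetical):

```python
# Trapezoidal-rule estimate of the area under an ROC curve.
def auc(points):
    points = sorted(points)                   # order by false positive rate
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        total += (x1 - x0) * (y0 + y1) / 2.0  # area of one trapezoid
    return total

points = [(0.0, 0.0), (0.0, 0.75), (0.25, 0.9), (0.5, 1.0), (1.0, 1.0)]
print(auc(points))  # -> 0.94375, "excellent" by the guide below
```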
An inexact guide to AUC values, modeled on the traditional academic points system, is given below (a small helper encoding these bands follows the list):
- 0.90-1.00: excellent accuracy
- 0.80-0.90: good accuracy
- 0.70-0.80: fair accuracy
- 0.60-0.70: poor accuracy
- 0.50: no discriminating ability
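A trivial helper mapping AUC values to the bands above; note the guide does not cover the 0.50-0.60 range, so the fallback label there is an assumption:

```python
# Map an AUC value to the descriptive bands listed above.
def auc_grade(auc):
    if auc >= 0.9:
        return "excellent accuracy"
    if auc >= 0.8:
        return "good accuracy"
    if auc >= 0.7:
        return "fair accuracy"
    if auc >= 0.6:
        return "poor accuracy"
    return "little or no discriminating ability"  # assumed label below 0.60

print(auc_grade(0.94))  # -> "excellent accuracy"
```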
Practical points
Although the apex of the curve is the point of best trade-off in terms of sensitivity and specificity, the actual clinical diagnostic situation determines where on the ROC curve one would like to operate: a screening test, for example, may favor a cut-off with higher sensitivity at the expense of more false positives, whereas a confirmatory test may favor higher specificity.
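One way to make that choice concrete (an assumption for illustration, not a method stated in this article) is to weight the two error types by their clinical cost and maximize a cost-weighted Youden index:

```python
# Choose an operating point by weighting false positives against false
# negatives; equal weights reduce to the standard Youden apex.
def best_operating_point(points, fp_cost=1.0, fn_cost=1.0):
    return max(points, key=lambda p: fn_cost * p[1] - fp_cost * p[0])

points = [(0.0, 0.0), (0.0, 0.75), (0.25, 0.9), (0.5, 1.0), (1.0, 1.0)]
# Screening scenario: missing disease is costly (fn_cost > fp_cost), so the
# chosen point shifts toward higher sensitivity.
print(best_operating_point(points, fp_cost=1.0, fn_cost=3.0))  # -> (0.5, 1.0)
```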
History and etymology
Receiver operating characteristic analysis arose out of observations of differences in signal detection by radar operators in World War II.