ROC Curve AUC

ROC curve (Receiver Operating Characteristic curve)

- A graph that plots the TPR on the y-axis against the FPR on the x-axis

ROC curve

To evaluate the accuracy of a test that predicts a dichotomous outcome, two metrics are commonly used: sensitivity and specificity.

e.g., how well the test can distinguish between people who have a given health condition and those who do not

Sensitivity
- Predicting 1 for cases that are actually 1
Specificity
- Predicting 0 for cases that are actually 0

The ROC curve summarizes the performance of a binary classifier in a single plot, expressed in terms of two quantities: the true positive rate (TPR) and the false positive rate (FPR).

True Positive Rate (TPR)
- TPR = sensitivity = 1 - false negative rate; also called the true accept rate
- The proportion of cases that are actually 1 and are correctly predicted as 1
- e.g., examining a cancer patient and diagnosing cancer
False Positive Rate (FPR)
- FPR = 1 - specificity; also called the false accept rate
- The proportion of cases that are actually 0 but are incorrectly predicted as 1
- e.g., diagnosing cancer in a patient who does not have cancer

The relationship between TPR and FPR

- When diagnosing cancer, a hasty doctor will call even the slightest symptom cancer.

- In this case the TPR approaches 1, but the FPR also rises (healthy people get labeled as cancer patients too).

- Conversely, an incompetent doctor who cannot recognize cancer will tell every patient they do not have cancer.

- In this case the TPR drops toward 0, but the FPR also falls (since the doctor never diagnoses cancer, there are no false cancer diagnoses either).

So TPR and FPR must both be measured while continuously varying some decision criterion (the threshold at which we predict 1).

In the end, performance has to be judged across the full range of TPR/FPR trade-offs, and the ROC curve is what lets us see all of them at a glance.

In other words, the ROC curve plots these pairs as a graph, making it easy to decide which point to use as the decision threshold. A minimal sketch of this threshold sweep is shown below.
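As a minimal sketch of this threshold sweep (the labels and scores here are made-up toy values), TPR and FPR can be measured at each candidate threshold like so:

```python
import numpy as np

# Toy true labels (1 = positive) and classifier scores in [0, 1].
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7, 0.55, 0.3])

# Sweep the decision threshold ("when do we predict 1?") and measure
# TPR and FPR at each setting; plotting these pairs gives the ROC curve.
for threshold in np.arange(0.0, 1.01, 0.25):
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    tpr = tp / (tp + fn)  # sensitivity
    fpr = fp / (fp + tn)  # 1 - specificity
    print(f"threshold={threshold:.2f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```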

The area under the ROC curve (Area Under a ROC Curve; AUC; AUROC)

- The X and Y axes of the ROC curve both range over [0, 1], and the curve runs from (0, 0) to (1, 1)

- The closer the area under the ROC curve is to 1 (i.e., the closer the curve gets to the top-left corner), the better the performance

- The AUC then ranges from 0.5 to 1 (0.5 means no discriminative ability at all; 1 means perfect performance)

Interpreting AUC

- If the classifier predicts 1 too readily, sensitivity rises, but because everything is called 1, specificity falls.

- So the result is only meaningful when both sensitivity and specificity are close to 1.

- That is why the ROC curve puts 1 - specificity on the X axis and sensitivity on the Y axis.

- Then y = 1 at x = 0 is optimal performance, and moving toward the lower right, the curve shows how quickly sensitivity increases relative to how quickly specificity decreases.

- The AUC value captures this overall relationship between sensitivity and specificity, which makes it a very convenient performance measure.

When AUC = 0.5

- Sensitivity increases only as much as specificity decreases, so there is no point where both can be high at once.

- Sensitivity is 0 when specificity is 1, and 1 when specificity is 0: an exact trade-off in which the two values always sum to 1.
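As a quick sketch of this (assuming sklearn is available; exact numbers vary with the seed), random scores give an AUC near 0.5, while a perfectly separating score gives 1.0:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=10_000)  # random binary labels

# Scores that carry no information about the label: AUC should be ~0.5.
random_scores = rng.random(10_000)
print(roc_auc_score(y_true, random_scores))

# Scores equal to the label itself separate the classes perfectly: AUC = 1.0.
print(roc_auc_score(y_true, y_true.astype(float)))
```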

*References

- https://angeloyeo.github.io/2020/08/05/ROC.html

AUC-ROC Curve – The Star Performer!

You’ve built your machine learning model – so what’s next? You need to evaluate it and validate how good (or bad) it is, so you can then decide on whether to implement it. That’s where the AUC-ROC curve comes in.

The name might be a mouthful, but it is just saying that we are calculating the “Area Under the Curve” (AUC) of the “Receiver Operating Characteristic” (ROC). Confused? I feel you! I have been in your shoes. But don’t worry, we will see what these terms mean in detail and everything will be a piece of cake!


For now, just know that the AUC-ROC curve helps us visualize how well our machine learning classifier is performing. Although it works for only binary classification problems, we will see towards the end how we can extend it to evaluate multi-class classification problems too.

We’ll cover topics like sensitivity and specificity as well since these are key topics behind the AUC-ROC curve.

I suggest going through the article on Confusion Matrix as it will introduce some important terms which we will be using in this article.

Table of Contents

  • What are Sensitivity and Specificity?
  • Probability of Predictions
  • What is the AUC-ROC Curve?
  • How Does the AUC-ROC Curve Work?
  • AUC-ROC in Python
  • AUC-ROC for Multi-Class Classification

What are Sensitivity and Specificity?

This is what a confusion matrix looks like:

|                 | Predicted Positive  | Predicted Negative  |
| --------------- | ------------------- | ------------------- |
| Actual Positive | True Positive (TP)  | False Negative (FN) |
| Actual Negative | False Positive (FP) | True Negative (TN)  |

From the confusion matrix, we can derive some important metrics that were not discussed in the previous article. Let’s talk about them here.

Sensitivity / True Positive Rate / Recall

Sensitivity = TPR = TP / (TP + FN)

Sensitivity tells us what proportion of the positive class got correctly classified.

A simple example would be to determine what proportion of the actual sick people were correctly detected by the model.

False Negative Rate

FNR = FN / (FN + TP)

False Negative Rate (FNR) tells us what proportion of the positive class got incorrectly classified by the classifier.

A higher TPR and a lower FNR are desirable since we want to correctly classify the positive class.

Specificity / True Negative Rate

Specificity = TNR = TN / (TN + FP)

Specificity tells us what proportion of the negative class got correctly classified.

Taking the same example as in Sensitivity, Specificity would mean determining the proportion of healthy people who were correctly identified by the model.

False Positive Rate

FPR = 1 - Specificity = FP / (FP + TN)

FPR tells us what proportion of the negative class got incorrectly classified by the classifier.

A higher TNR and a lower FPR are desirable since we want to correctly classify the negative class.

Out of these metrics, Sensitivity and Specificity are perhaps the most important and we will see later on how these are used to build an evaluation metric. But before that, let’s understand why the probability of prediction is better than predicting the target class directly.

Probability of Predictions

A machine learning classification model can be used to predict the actual class of the data point directly or predict its probability of belonging to different classes. The latter gives us more control over the result. We can determine our own threshold to interpret the result of the classifier. This is sometimes more prudent than just building a completely new model!

Setting different thresholds for classifying positive class for data points will inadvertently change the Sensitivity and Specificity of the model. And one of these thresholds will probably give a better result than the others, depending on whether we are aiming to lower the number of False Negatives or False Positives.
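As a sketch of this idea (the dataset and model here are placeholder choices, not from the article), a custom threshold can be applied to the predict_proba() output instead of relying on predict()'s fixed 0.5 cutoff:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Placeholder data and model, purely to illustrate thresholding.
X, y = make_classification(n_samples=1000, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Probability of the positive class for each sample.
proba = model.predict_proba(X)[:, 1]

# Lowering the cutoff flags more positives (fewer False Negatives, more
# False Positives); raising it does the opposite.
for threshold in (0.3, 0.5, 0.7):
    y_pred = (proba >= threshold).astype(int)
    print(threshold, y_pred.sum(), "predicted positive")
```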

Have a look at the table below:

[Table: the classification metrics computed at several different threshold values]

The metrics change with the changing threshold values. We can generate different confusion matrices and compare the various metrics that we discussed in the previous section. But that would not be a prudent thing to do. Instead, what we can do is generate a plot between some of these metrics so that we can easily visualize which threshold is giving us a better result.

The AUC-ROC curve solves just that problem!

What is the AUC-ROC curve?

The Receiver Operating Characteristic (ROC) curve is an evaluation metric for binary classification problems. It is a probability curve that plots the TPR against FPR at various threshold values and essentially separates the ‘signal’ from the ‘noise’. The Area Under the Curve (AUC) is the measure of the ability of a classifier to distinguish between classes and is used as a summary of the ROC curve.

The higher the AUC, the better the performance of the model at distinguishing between the positive and negative classes.


When AUC = 1, the classifier is able to perfectly distinguish between all the Positive and the Negative class points. If, however, the AUC had been 0, then the classifier would be predicting all Negatives as Positives, and all Positives as Negatives.


When 0.5 < AUC < 1, there is a high chance that the classifier will be able to distinguish the positive class values from the negative class values. This is because the classifier detects more True Positives and True Negatives than False Negatives and False Positives.


When AUC = 0.5, the classifier is not able to distinguish between Positive and Negative class points at all, meaning it is predicting either a random class or a constant class for every data point.

So, the higher the AUC value for a classifier, the better its ability to distinguish between positive and negative classes.

How Does the AUC-ROC Curve Work?

In a ROC curve, a higher X-axis value indicates a higher number of False Positives than True Negatives, while a higher Y-axis value indicates a higher number of True Positives than False Negatives. So, the choice of the threshold depends on the ability to balance between False Positives and False Negatives.

Let’s dig a bit deeper and understand what our ROC curve would look like for different threshold values and how the specificity and sensitivity would vary.

[Figure: ROC curve with candidate threshold points A–E marked along it; the diagonal blue line marks TPR = FPR]

We can try and understand this graph by generating a confusion matrix for each point corresponding to a threshold and talk about the performance of our classifier:


Point A is where the Sensitivity is the highest and Specificity the lowest. This means all the Positive class points are classified correctly and all the Negative class points are classified incorrectly.

In fact, any point on the blue line corresponds to a situation where True Positive Rate is equal to False Positive Rate.

All points above this line correspond to the situation where the proportion of correctly classified points belonging to the Positive class is greater than the proportion of incorrectly classified points belonging to the Negative class.


Although Point B has the same Sensitivity as Point A, it has a higher Specificity. Meaning the number of incorrectly classified Negative class points is lower compared to the previous threshold. This indicates that this threshold is better than the previous one.


Between points C and D, the Sensitivity at point C is higher than point D for the same Specificity. This means, for the same number of incorrectly classified Negative class points, the classifier predicted a higher number of Positive class points. Therefore, the threshold at point C is better than point D.

Now, depending on how many incorrectly classified points we want to tolerate for our classifier, we would choose between point B or C for predicting whether you can defeat me in PUBG or not.

“False hopes are more dangerous than fears.” – J.R.R. Tolkien


Point E is where the Specificity becomes highest. Meaning there are no False Positives classified by the model. The model can correctly classify all the Negative class points! We would choose this point if our problem was to give perfect song recommendations to our users.

Going by this logic, can you guess where the point corresponding to a perfect classifier would lie on the graph?

Yes! It would be at the top-left corner of the ROC graph, corresponding to the coordinate (0, 1) in the cartesian plane. It is here that both the Sensitivity and the Specificity would be the highest, and the classifier would correctly classify all the Positive and Negative class points.

Understanding the AUC-ROC Curve in Python

Now, either we can manually test the Sensitivity and Specificity for every threshold or let sklearn do the job for us. We’re definitely going with the latter!

Let’s create our arbitrary data using the sklearn make_classification method:

Python Code:
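The original snippet is not preserved on this page; here is a minimal sketch of the likely setup (sample counts and seeds are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary classification data.
X, y = make_classification(n_samples=2000, n_classes=2, random_state=42)

# Hold out a test set on which to compare the two classifiers.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)
```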

I will test the performance of two classifiers on this dataset:
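Per the comparison drawn later in the article, the two classifiers are Logistic Regression and K-Nearest Neighbors; a sketch of fitting both and extracting positive-class probabilities:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Fit both models on the training split.
log_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
knn_model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# ROC analysis needs scores rather than hard labels: take P(class = 1).
log_probs = log_model.predict_proba(X_test)[:, 1]
knn_probs = knn_model.predict_proba(X_test)[:, 1]
```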

Sklearn has a very potent method roc_curve() which computes the ROC for your classifier in a matter of seconds! It returns the FPR, TPR, and threshold values:
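Continuing the sketch above:

```python
from sklearn.metrics import roc_curve

# FPR, TPR, and the thresholds at which they were evaluated, per model.
log_fpr, log_tpr, log_thresholds = roc_curve(y_test, log_probs)
knn_fpr, knn_tpr, knn_thresholds = roc_curve(y_test, knn_probs)
```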

The AUC score can be computed using the roc_auc_score() method of sklearn:
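In the sketch, that looks like this (the output quoted below is the article's reported pair of scores; exact values depend on the data and random seed):

```python
from sklearn.metrics import roc_auc_score

# AUC for Logistic Regression and for KNN, respectively.
print(roc_auc_score(y_test, log_probs), roc_auc_score(y_test, knn_probs))
```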

0.9761029411764707 0.9233769727403157


We can also plot the ROC curves for the two algorithms using matplotlib:
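A sketch of the plot, reusing the roc_curve() outputs from above:

```python
import matplotlib.pyplot as plt

plt.plot(log_fpr, log_tpr, linestyle='--', label='Logistic Regression')
plt.plot(knn_fpr, knn_tpr, linestyle='--', label='KNN')
plt.plot([0, 1], [0, 1], color='gray', label='Random classifier')  # chance line
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.title('ROC Curves')
plt.legend()
plt.show()
```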


It is evident from the plot that the AUC for the Logistic Regression ROC curve is higher than that for the KNN ROC curve. Therefore, we can say that logistic regression did a better job of classifying the positive class in the dataset.

AUC-ROC for Multi-Class Classification

Like I said before, the AUC-ROC curve is only for binary classification problems. But we can extend it to multiclass classification problems by using the One vs All technique.

So, if we have three classes 0, 1, and 2, the ROC for class 0 will be generated as classifying 0 against not 0, i.e. 1 and 2. The ROC for class 1 will be generated as classifying 1 against not 1, and so on.

The ROC curve for multi-class classification models can be determined as below:
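The original snippet is not preserved here; a sketch of the One vs All approach, binarizing the labels and computing one ROC curve and AUC per class (the 3-class dataset and model are placeholder choices):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize

# Placeholder 3-class dataset.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=4,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)              # one column per class
y_test_bin = label_binarize(y_test, classes=[0, 1, 2])

# One vs All: for each class k, treat k as positive and the rest as negative.
for k in range(3):
    fpr, tpr, _ = roc_curve(y_test_bin[:, k], probs[:, k])
    auc = roc_auc_score(y_test_bin[:, k], probs[:, k])
    print(f"class {k} vs rest: AUC = {auc:.3f}")
```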


End Notes

I hope you found this article useful in understanding how powerful the AUC-ROC curve metric is in measuring the performance of a classifier. You’ll use this a lot in the industry and even in data science or machine learning hackathons. Better get familiar with it!

Going further I would recommend you the following courses that will be useful in building your data science acumen:

  • Introduction to Data Science
  • Applied Machine Learning