### AUC of the PR Curve vs. AP/mAP

#### Conclusion

sklearn.metrics.average_precision_score(y_true, y_score, *, average='macro', pos_label=1, sample_weight=None)

Compute average precision (AP) from prediction scores.

AP summarizes a precision-recall curve as the weighted mean of precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight:

$$\text{AP} = \sum_n (R_n - R_{n-1})\, P_n$$

where $P_n$ and $R_n$ are the precision and recall at the nth threshold [1]. This implementation is not interpolated and is different from computing the area under the precision-recall curve with the trapezoidal rule, which uses linear interpolation and can be too optimistic.

Note: this implementation is restricted to the binary classification task or multilabel classification task.


#### Source Code

AUC-PR integrates directly with the trapezoidal rule (`trapz`): each segment contributes (P_k + P_{k-1}) / 2 * (R_k - R_{k-1}).

AP-PR instead uses P_k * (R_k - R_{k-1}).
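The two summation rules above can be reproduced by hand and checked against `auc` and `average_precision_score`. This is a minimal sketch on hypothetical toy labels and scores; the manual formulas mirror sklearn's internals as described above:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc, average_precision_score

# hypothetical toy data: 6 positives, 4 negatives with made-up scores
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.6, 0.9, 0.3, 0.55, 0.2])

precision, recall, _ = precision_recall_curve(y_true, y_score)

# recall is returned in decreasing order, so np.diff(recall) is negative;
# the leading minus sign turns each segment into a positive area.

# trapezoidal rule (what auc() computes): average of adjacent precisions
ap_trapz = -np.sum(np.diff(recall) * (precision[:-1] + precision[1:]) / 2)

# step-wise sum (what average_precision_score computes): left precision only
ap_step = -np.sum(np.diff(recall) * precision[:-1])

print(f"trapezoidal: {ap_trapz:.4f}, step-wise: {ap_step:.4f}")
```

Comparing the two manual values against the library functions makes the difference between the interpolated and uninterpolated definitions concrete.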

#### Example:

##### Use average_precision_score

The average precision (AP) is returned by passing the true labels and the probability estimates to `average_precision_score`.
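A minimal end-to-end sketch, assuming a synthetic dataset from `make_classification` and a logistic regression classifier (both are illustrative choices, not from the original text):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score

# build a toy binary classification problem
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)

# pass the true labels and the positive-class probabilities
y_prob = clf.predict_proba(X_test)[:, 1]
ap = average_precision_score(y_test, y_prob)
print(f"AP = {ap:.4f}")
```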

##### Use precision_recall_curve & auc

When using the `auc` function to compute the area under the precision-recall curve, as mentioned earlier, the result is not the same as the value from `average_precision_score`, but it does not differ much when the number of data points is large enough to mitigate the effect of the wiggles.
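The comparison can be sketched as follows, again on an assumed synthetic dataset and logistic regression model (illustrative, not from the original text):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve, auc, average_precision_score

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
y_prob = LogisticRegression().fit(X_train, y_train).predict_proba(X_test)[:, 1]

precision, recall, _ = precision_recall_curve(y_test, y_prob)
pr_auc = auc(recall, precision)               # trapezoidal interpolation
ap = average_precision_score(y_test, y_prob)  # uninterpolated step-wise sum

print(f"PR AUC = {pr_auc:.4f}, AP = {ap:.4f}")  # close, but not identical
```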

##### Use built-in function to plot precision-recall curve

In version 0.22.0 of scikit-learn, `plot_precision_recall_curve` was added to the metrics module. It makes it easy to plot a precision-recall curve with sufficient information directly from the classifier, without any extra step to generate the probability predictions.

If you need to compute the area under the precision-recall curve, don't forget to use `average_precision_score`, which gives you a robust result quickly.
