False Negatives

Published: October 8, 2022

Here is the confusion matrix we generated after evaluating a model on a dataset with 100 samples:

|          | Predicted A | Predicted B |
|----------|-------------|-------------|
| Actual A | 52          | 7           |
| Actual B | 13          | 28          |

Assume that class A represents the outcome we are interested in finding.

What’s the total number of False Negatives in this evaluation round?

  1. False Negatives are class A samples the model predicted as class B, so the answer is 7.

  2. False Negatives are class B samples the model predicted as class A, so the answer is 13.

  3. False Negatives are class A samples the model predicted as class A, so the answer is 52.

  4. False Negatives are class B samples the model predicted as class B, so the answer is 28.

The correct answer is option 1.

We are interested in samples from class A, which means we will treat class A as our “Positive” samples and class B as our “Negative” samples.

If we relabel the classes in the confusion matrix as Positive and Negative instead of “A” and “B,” it’s much easier to reason about the model’s False Negatives:

|                 | Predicted Positive | Predicted Negative |
|-----------------|--------------------|--------------------|
| Actual Positive | 52 (True Positives)  | 7 (False Negatives)  |
| Actual Negative | 13 (False Positives) | 28 (True Negatives)  |

False Negatives are the samples we expect to be Positive (class A) but the model predicted as Negative (class B). Therefore, the correct answer to the question is 7. You can see every combination in the table above.
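If you want to verify the arithmetic in code, here is a minimal sketch using scikit-learn’s `confusion_matrix`. The `y_true` and `y_pred` arrays are hypothetical, constructed only to reproduce the counts from this post:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical evaluation results matching the counts above.
# "A" is the positive class.
y_true = ["A"] * 59 + ["B"] * 41            # 59 actual A, 41 actual B
y_pred = (
    ["A"] * 52 + ["B"] * 7                  # actual A: 52 predicted A, 7 predicted B
    + ["A"] * 13 + ["B"] * 28               # actual B: 13 predicted A, 28 predicted B
)

# With labels=["A", "B"], rows are actual classes and columns are
# predicted classes, so cm[0][1] is "actual A, predicted B".
cm = confusion_matrix(y_true, y_pred, labels=["A", "B"])

tp, fn = cm[0][0], cm[0][1]
fp, tn = cm[1][0], cm[1][1]

print(f"False Negatives: {fn}")  # 7
```

Note that scikit-learn follows the same convention as the table above: rows correspond to actual classes and columns to predicted classes, so the False Negatives sit in the first row, second column.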

Recommended reading

  • Check “Confusion Matrix” for a full explanation of how a confusion matrix works and how you can use it as part of your work.
  • Check “When accuracy doesn’t help” for an introduction to precision, recall, and f1-score metrics to measure a machine learning model’s performance.