This component computes classification performance metrics for a binary classifier's predicted probabilities across a range of decision thresholds.
Given a table of predicted probabilities and true class labels, the component evaluates the following metrics at each threshold from 0 to 1 (with a configurable step size):
Precision
Recall
F1 Score
Accuracy
It outputs a table mapping each threshold to its corresponding metric values. This supports model evaluation under varying decision thresholds, which is especially useful when classes are imbalanced or misclassification costs are asymmetric.
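The threshold sweep described above can be sketched as follows. This is an illustrative implementation, not the component's actual code: the function name `threshold_metrics`, the list-of-dicts output format, and the convention that a probability equal to the threshold counts as the positive class are all assumptions. Undefined ratios (e.g. precision when nothing is predicted positive) are set to 0.0 here; the real component may handle them differently.

```python
def threshold_metrics(probs, labels, step=0.1):
    """Compute precision, recall, F1, and accuracy at each threshold.

    probs  -- predicted probabilities for the positive class
    labels -- true class labels (0 or 1)
    step   -- threshold increment between 0 and 1
    Returns one row (dict) per threshold, forming the output table.
    """
    rows = []
    n_steps = round(1.0 / step)
    for i in range(n_steps + 1):
        t = i * step  # avoids floating-point drift from repeated addition
        # Assumed convention: probability >= threshold -> positive class
        preds = [1 if p >= t else 0 for p in probs]
        tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
        fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
        fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
        tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        accuracy = (tp + tn) / len(labels)
        rows.append({"threshold": t, "precision": precision,
                     "recall": recall, "f1": f1, "accuracy": accuracy})
    return rows
```

For example, with `step=0.1` the function returns 11 rows (thresholds 0.0 through 1.0), each mapping a threshold to its four metric values.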