8.5. copro.evaluation.evaluate_prediction¶
- copro.evaluation.evaluate_prediction(y_test, y_pred, y_prob, X_test, clf, config)[source]¶
Computes a range of model evaluation metrics and appends the resulting scores to a dictionary. This is done separately for each model execution. Output is written to stderr if possible.
- Parameters
y_test (list) – list containing test-sample conflict data.
y_pred (list) – list containing predictions.
y_prob (array) – array containing the resulting probabilities of the predictions.
X_test (array) – array containing test-sample variable values.
clf (classifier) – sklearn-classifier used in the simulation.
config (ConfigParser-object) – object containing the parsed configuration settings of the model.
- Returns
dictionary with evaluation scores for each simulation
- Return type
dict
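A minimal sketch of how such a score dictionary could be assembled with scikit-learn is shown below. The helper name evaluate_prediction_sketch and the particular choice of metrics are illustrative assumptions, not the actual copro implementation; the real function additionally receives X_test, clf, and config.

    import numpy as np
    from sklearn import metrics
    from sklearn.ensemble import RandomForestClassifier

    def evaluate_prediction_sketch(y_test, y_pred, y_prob):
        """Collect common binary-classification scores in a dictionary,
        one entry per metric, for a single model execution (sketch)."""
        scores = {
            "Accuracy": metrics.accuracy_score(y_test, y_pred),
            "Precision": metrics.precision_score(y_test, y_pred, zero_division=0),
            "Recall": metrics.recall_score(y_test, y_pred, zero_division=0),
            "F1 score": metrics.f1_score(y_test, y_pred, zero_division=0),
            "Cohen-Kappa score": metrics.cohen_kappa_score(y_test, y_pred),
            # probability-based scores use the probability of the positive class
            "Brier loss score": metrics.brier_score_loss(y_test, y_prob[:, 1]),
            "ROC AUC score": metrics.roc_auc_score(y_test, y_prob[:, 1]),
            "AP score": metrics.average_precision_score(y_test, y_prob[:, 1]),
        }
        return scores

    # hypothetical usage with a fitted sklearn classifier
    X = np.random.rand(200, 3)
    y = np.random.randint(0, 2, 200)
    clf = RandomForestClassifier().fit(X[:150], y[:150])
    scores = evaluate_prediction_sketch(
        y[150:], clf.predict(X[150:]), clf.predict_proba(X[150:])
    )

In copro, a dictionary of this shape would typically be appended to a collection of per-simulation results so that score distributions can be inspected across repeated model executions.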