
Tool for evaluating target performance using a comparative table #58

Open
wants to merge 4 commits into main

Conversation

infingardi

What was changed

Added the function targets_performance_comparative_table to _bibmon_tools.py

Objective

Evaluates the performance of every candidate dependent variable (target) in a regression model by computing key performance metrics:

  • R² score: measures how well the predictions align with the actual data.
  • Mean Absolute Error (MAE): quantifies the average magnitude of the prediction error.
  • False Alarm Rate (FAR): the fraction of normal samples that trigger a false alarm.
  • Fault Detection Rate (FDR): the fraction of faulty samples that are correctly flagged.
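The four metrics above can be sketched as follows. This is a minimal illustration with hypothetical arrays, not code from the PR: it assumes scikit-learn for R² and MAE, and the common definitions of FAR (alarms during normal operation over all normal samples) and FDR (alarms during the fault over all faulty samples).

```python
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error

# Hypothetical data for illustration (not from the PR):
# y_true / y_pred are continuous target values, alarm is the binary
# alarm signal from the monitoring model, fault marks the samples
# where a fault is actually present.
y_true = np.array([1.0, 1.1, 0.9, 3.0, 3.2, 3.1])
y_pred = np.array([1.0, 1.0, 1.0, 2.8, 3.0, 3.0])
alarm  = np.array([0, 1, 0, 1, 1, 1])
fault  = np.array([0, 0, 0, 1, 1, 1])

# Prediction quality
r2  = r2_score(y_true, y_pred)
mae = mean_absolute_error(y_true, y_pred)

# FAR: alarms raised while no fault is present, over all normal samples.
far = alarm[fault == 0].mean()

# FDR: alarms raised while a fault is present, over all faulty samples.
fdr = alarm[fault == 1].mean()
```

A good target shows high R² and FDR with low MAE and FAR; the comparative table gathers these four numbers per candidate target.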

Importance

Selecting the best target variable is essential for improving regression model performance: it yields more accurate predictions (higher R², lower MAE), better fault detection (higher FDR), and fewer false positives (lower FAR). This improves model reliability, reduces noise from false alarms, and directs resources effectively, making the system more efficient and robust in practice.

Benefits

Provides a quick way to identify the best variable (target) to include in the Y set.

How to use

tab_pred, tab_detec = targets_performance_comparative_table(
    dataFault,
    start_train=start_train,
    end_train=end_train,
    end_validation=end_validation,
    end_test=end_test,
    tags=allTags,
    model=model,
    metrics=mtr,
    count_window_size=3,
    count_limit=2,
    fault_start=fault_start,
    fault_end=fault_end,
)

# Display the prediction table
tab_pred

# Display the detection table
tab_detec
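Once the two tables are available, choosing the best target is a matter of sorting. A minimal sketch, assuming the tables are pandas DataFrames indexed by tag name (the column names and values here are hypothetical, not taken from the PR):

```python
import pandas as pd

# Hypothetical tables mimicking the function's output;
# column names and values are assumptions for illustration.
tab_pred = pd.DataFrame(
    {"R2": [0.95, 0.80, 0.60], "MAE": [0.1, 0.3, 0.5]},
    index=["tag_A", "tag_B", "tag_C"],
)
tab_detec = pd.DataFrame(
    {"FDR": [0.92, 0.70, 0.40], "FAR": [0.02, 0.10, 0.25]},
    index=["tag_A", "tag_B", "tag_C"],
)

# Rank targets: higher R2 and FDR are better, lower MAE and FAR are better.
ranking = tab_pred.join(tab_detec).sort_values(["R2", "FDR"], ascending=False)
best_target = ranking.index[0]
```

In practice one might weight the four metrics differently depending on whether prediction accuracy or detection performance matters more for the application.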
