Calculate loss support #1075
Conversation
Codecov Report

@@            Coverage Diff             @@
##           development    #1075   +/-  ##
===========================================
- Coverage        85.46%   85.44%   -0.03%
===========================================
  Files              130      130
  Lines            10334    10330       -4
===========================================
- Hits              8832     8826       -6
- Misses            1502     1504       +2

Continue to review full report at Codecov.
This looks good and will make the code much easier to work with. I added a few comments on parts I don't fully understand right now.
Squashed commit messages:

* MAINT cleanup readme and remove old service yaml file (.landscape.yaml)
* MAINT bump to dev version
* move from fork to spawn
* FIX_1061 (automl#1063)
* FIX_1061
* Fxi type of target
* Moving to classes_
* classes_ should be np.ndarray
* Force float before nan
* Pynisher context is passed to metafeatures (automl#1076)
* Pynisher context to metafeatures
* Update test_smbo.py Co-authored-by: Matthias Feurer <[email protected]>
* Calculate loss support (automl#1075)
* Calculate loss support
* Relaxed log loss test for individual models
* Feedback from automl#1075
* Missing loss in comment
* Revert back test as well
* Fix rank for metrics for which greater value is not good (automl#1079)
* Enable Mypy in evaluation (except Train Evaluator) (automl#1077)
* Almost all files for evaluation
* Feedback from PR
* Feedback from comments
* Solving rebase artifacts
* Revert bytes
* Automatically update the Copyright when building the html (automl#1074)
* update the year automatically
* Fixes for new numpy
* Revert test
* Prepare new release (automl#1081)
* prepare new release
* fix unit test
* bump version number
* Fix 1072 (automl#1073)
* Improve selector checking
* Remove copy error
* Rebase changes to development
* No .cache and check selector path
* Missing params in signature (automl#1084)
* Add size check before trying to split for GMeans (automl#732)
* Add size check before trying to split
* Rebase to new code Co-authored-by: chico <[email protected]>
* Fxi broken links in docs and update parallel docs (automl#1088)
* Fxi broken links
* Feedback from comments
* Update manual.rst Co-authored-by: Matthias Feurer <[email protected]>
* automl#660 Enable Power Transformations Update (automl#1086)
* Power Transformer
* Correct typo
* ADD_630
* PEP8 compliance
* Fix target type Co-authored-by: MaxGreil <[email protected]>
* Stale Support (automl#1090)
* Stale Support
* Enhanced criteria for stale
* Enable weekly cron job
* test

Co-authored-by: Matthias Feurer <[email protected]>
Co-authored-by: Matthias Feurer <[email protected]>
Co-authored-by: Rohit Agarwal <[email protected]>
Co-authored-by: Pepe Berba <[email protected]>
Co-authored-by: MaxGreil <[email protected]>
Enables calculate_loss, which ensures that all optimization problems are treated as minimization problems.
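To illustrate the idea, here is a minimal sketch of how a score can be converted into a loss so that every metric is minimized. The function name mirrors the PR, but the signature (`optimum`, `sign`) and the logic are illustrative assumptions, not the project's actual API:

```python
def calculate_loss(score: float, optimum: float, sign: float = 1.0) -> float:
    """Convert a metric score into a loss to be minimized.

    optimum: the best achievable value of the metric
             (e.g. 1.0 for accuracy, 0.0 for log loss).
    sign:    +1.0 if greater scores are better, -1.0 otherwise.
    The returned loss is 0 at the optimum and grows as the score worsens.
    """
    return sign * (optimum - score)

# Accuracy: greater is better, optimum 1.0 -> a perfect model has loss 0.
assert calculate_loss(1.0, optimum=1.0, sign=1.0) == 0.0
assert calculate_loss(0.75, optimum=1.0, sign=1.0) == 0.25

# Log loss: smaller is better, optimum 0.0 -> the loss is the score itself.
assert calculate_loss(0.3, optimum=0.0, sign=-1.0) == 0.3
```

Under this convention, the optimizer can always minimize the returned value regardless of whether the underlying metric is a score or an error.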
Please note that calculate_score previously made any call to functions like log_loss return the negated version of scikit-learn's log_loss. I have changed this so that we return the scikit-learn score unmodified and only flip its sign in calculate_loss. This implied changes in a couple of places in the testing code.
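The sign convention in question can be seen directly with scikit-learn: `log_loss` returns a positive value where lower is better, so negating it turns it into a "score" where greater is better. A small sketch of the before/after behavior described above (the comments describe the PR's intent, not verbatim project code):

```python
from sklearn.metrics import log_loss

y_true = [0, 1, 1, 0]
y_prob = [0.1, 0.9, 0.8, 0.3]  # predicted probability of class 1

# scikit-learn's log_loss is a positive quantity; lower is better.
raw = log_loss(y_true, y_prob)
assert raw > 0.0

# Before this change, calculate_score effectively returned -raw for log loss.
# After this change, the raw scikit-learn value is returned as the score,
# and only calculate_loss converts it into a minimization target.
```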