
Conversation

franchuterivera
Contributor

Enables calculate_loss, which ensures that all optimization problems are treated as minimization problems.

Please note that calculate_score previously caused any call to functions like log_loss to return the negated value of scikit-learn's log_loss. I have changed it so that we return the scikit-learn score unmodified and only transform it in calculate_loss. This required changes in a couple of places in the testing code.
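The split between the two functions can be sketched as follows. This is an illustrative minimal sketch, not auto-sklearn's actual API: the metric descriptors, function signatures, and the `METRICS` table are hypothetical, but they capture the idea that `calculate_score` returns the raw scikit-learn value while `calculate_loss` maps every metric to a quantity that is minimized.

```python
from sklearn.metrics import accuracy_score, log_loss

# Hypothetical metric descriptors: (scikit-learn function, optimum, greater_is_better)
METRICS = {
    "accuracy": (accuracy_score, 1.0, True),
    "log_loss": (log_loss, 0.0, False),
}

def calculate_score(name, y_true, y_pred):
    """Return the unmodified scikit-learn score (no sign flipping)."""
    fn, _, _ = METRICS[name]
    return fn(y_true, y_pred)

def calculate_loss(name, y_true, y_pred):
    """Map any score to a value where smaller is always better."""
    fn, optimum, greater_is_better = METRICS[name]
    score = fn(y_true, y_pred)
    # Greater-is-better scores are flipped around their optimum;
    # smaller-is-better scores (e.g. log_loss) are already losses.
    return optimum - score if greater_is_better else score
```

Under this sketch, `calculate_score("log_loss", ...)` returns exactly what scikit-learn's `log_loss` returns (no negation), and `calculate_loss` leaves it untouched, while a greater-is-better metric like accuracy is converted to `optimum - score` so the optimizer can always minimize.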

@codecov

codecov bot commented Feb 4, 2021

Codecov Report

Merging #1075 (a5a2ae4) into development (26760aa) will decrease coverage by 0.02%.
The diff coverage is 85.71%.

Impacted file tree graph

@@               Coverage Diff               @@
##           development    #1075      +/-   ##
===============================================
- Coverage        85.46%   85.44%   -0.03%     
===============================================
  Files              130      130              
  Lines            10334    10330       -4     
===============================================
- Hits              8832     8826       -6     
- Misses            1502     1504       +2     
Impacted Files Coverage Δ
autosklearn/ensembles/ensemble_selection.py 67.36% <54.54%> (+0.23%) ⬆️
autosklearn/evaluation/test_evaluator.py 92.59% <66.66%> (+1.68%) ⬆️
autosklearn/evaluation/abstract_evaluator.py 88.57% <80.00%> (-0.23%) ⬇️
autosklearn/ensemble_builder.py 76.65% <90.00%> (-0.45%) ⬇️
autosklearn/__version__.py 100.00% <100.00%> (ø)
autosklearn/data/target_validator.py 97.08% <100.00%> (+0.11%) ⬆️
autosklearn/estimators.py 93.07% <100.00%> (+0.05%) ⬆️
autosklearn/smbo.py 83.83% <100.00%> (+1.92%) ⬆️
...eline/components/feature_preprocessing/fast_ica.py 91.30% <0.00%> (-6.53%) ⬇️
...ipeline/components/regression/gradient_boosting.py 89.42% <0.00%> (-2.89%) ⬇️
... and 5 more

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 26760aa...3fffcd3.

Contributor

@mfeurer mfeurer left a comment
This looks good and will make the code so much simpler. I added a few comments on parts I don't fully understand right now.

@mfeurer mfeurer merged commit cf27323 into automl:development Feb 16, 2021
franchuterivera added a commit to franchuterivera/auto-sklearn that referenced this pull request Mar 11, 2021
* MAINT cleanup readme and remove old service yaml file (.landscape.yaml)

* MAINT bump to dev version

* move from fork to spawn

* FIX_1061 (automl#1063)

* FIX_1061

* Fix type of target

* Moving to classes_

* classes_ should be np.ndarray

* Force float before nan

* Pynisher context is passed to metafeatures (automl#1076)

* Pynisher context to metafeatures

* Update test_smbo.py

Co-authored-by: Matthias Feurer <[email protected]>

* Calculate loss support (automl#1075)

* Calculate loss support

* Relaxed log loss test for individual models

* Feedback from automl#1075

* Missing loss in comment

* Revert back test as well

* Fix rank for metrics for which greater value is not good (automl#1079)

* Enable Mypy in evaluation (except Train Evaluator) (automl#1077)

* Almost all files for evaluation

* Feedback from PR

* Feedback from comments

* Solving rebase artifacts

* Revert bytes

* Automatically update the Copyright when building the html (automl#1074)

* update the year automatically

* Fixes for new numpy

* Revert test

* Prepare new release (automl#1081)

* prepare new release

* fix unit test

* bump version number

* Fix 1072 (automl#1073)

* Improve selector checking

* Remove copy error

* Rebase changes to development

* No .cache and check selector path

* Missing params in signature (automl#1084)

* Add size check before trying to split for GMeans (automl#732)

* Add size check before trying to split

* Rebase to new code

Co-authored-by: chico <[email protected]>

* Fix broken links in docs and update parallel docs (automl#1088)

* Fix broken links

* Feedback from comments

* Update manual.rst

Co-authored-by: Matthias Feurer <[email protected]>

* automl#660 Enable Power Transformations Update (automl#1086)

* Power Transformer

* Correct typo

* ADD_630

* PEP8 compliance

* Fix target type

Co-authored-by: MaxGreil <[email protected]>

* Stale Support (automl#1090)

* Stale Support

* Enhanced criteria for stale

* Enable weekly cron job

* test

Co-authored-by: Matthias Feurer <[email protected]>
Co-authored-by: Matthias Feurer <[email protected]>
Co-authored-by: Rohit Agarwal <[email protected]>
Co-authored-by: Pepe Berba <[email protected]>
Co-authored-by: MaxGreil <[email protected]>