
Conversation

@TomFinley (Contributor)

  • Factor stateful logic into a separate internal object (a rough sketch of the pattern follows below).
  • Remove direct usage of Console.WriteLine.
  • Opportunistic fixes of minor issues.

Fixes #1358.
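(A rough sketch of the pattern in the first bullet, with hypothetical names rather than the actual ML.NET internals: configuration stays on the trainer, and all mutable training state moves into a nested object created per Fit call, so the estimator can be fitted repeatedly.)

    internal sealed class OnlineLinearTrainerSketch
    {
        // Immutable configuration only; no mutable per-training fields live here.
        private readonly int _numIterations;
        private readonly float _learningRate;

        public OnlineLinearTrainerSketch(int numIterations, float learningRate)
        {
            _numIterations = numIterations;
            _learningRate = learningRate;
        }

        // Each call to Fit builds its own TrainState, so fitting twice
        // (e.g. across CV folds) never trips over leftover state from a previous run.
        public float[] Fit(float[][] features, float[] labels)
        {
            var state = new TrainState(features[0].Length);
            for (int iter = 0; iter < _numIterations; iter++)
                for (int i = 0; i < features.Length; i++)
                    state.Update(features[i], labels[i], _learningRate);
            return state.Weights;
        }

        // All mutable per-training data lives here, not on the trainer itself.
        private sealed class TrainState
        {
            public float[] Weights { get; }

            public TrainState(int dimension) => Weights = new float[dimension];

            public void Update(float[] x, float y, float learningRate)
            {
                float score = 0;
                for (int j = 0; j < x.Length; j++)
                    score += Weights[j] * x[j];
                float error = y - score;
                for (int j = 0; j < x.Length; j++)
                    Weights[j] += learningRate * error * x[j];
            }
        }
    }

Because the trainer holds only configuration, sequential (or concurrent) Fit calls cannot interfere with one another.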

@TomFinley self-assigned this Oct 30, 2018
@Zruty0 (Contributor) left a comment

:shipit:

@sfilipi (Member) commented Oct 30, 2018

    public void OnlineLinearWorkout()

It might be worth adding a test involving CV. I ran into this using CV:

#1358

#Resolved


Refers to: test/Microsoft.ML.Tests/TrainerEstimators/OnlineLinearTests.cs:16 in e811fb1.
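(A CV-style repro along these lines exercises exactly that path. This is a sketch using the current MLContext API rather than the test utilities this repo used at the time; the data file "sentiment.tsv" and its schema are placeholders.)

    using Microsoft.ML;
    using Microsoft.ML.Data;

    public static class AveragedPerceptronCvSketch
    {
        private sealed class SentimentRow
        {
            [LoadColumn(0)] public bool Label { get; set; }
            [LoadColumn(1)] public string Text { get; set; }
        }

        public static void Run()
        {
            var mlContext = new MLContext(seed: 1);
            IDataView data = mlContext.Data.LoadFromTextFile<SentimentRow>("sentiment.tsv", hasHeader: true);

            var pipeline = mlContext.Transforms.Text.FeaturizeText("Features", nameof(SentimentRow.Text))
                .Append(mlContext.BinaryClassification.Trainers.AveragedPerceptron());

            // Cross-validation calls Fit on the same trainer estimator once per fold,
            // which is exactly the re-use pattern that used to throw "Can't re-use trainer!".
            var cvResults = mlContext.BinaryClassification.CrossValidateNonCalibrated(
                data, pipeline, numberOfFolds: 5);
        }
    }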

@TomFinley (Contributor, Author) commented Oct 30, 2018

    public void OnlineLinearWorkout()

Hi @sfilipi, while it's true that's how the issue was discovered, I feel the question of whether an IEstimator can be called multiple times and produce sensible results ought to be handled more systematically than by remembering to "also write a CV test" for every ITrainerEstimator implementation.

I think once we actually switch the CV tests to use the CV API rather than the command line, this will just happen naturally.


In reply to: 434448974


Refers to: test/Microsoft.ML.Tests/TrainerEstimators/OnlineLinearTests.cs:16 in e811fb1.
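(Concretely, the invariant being described: the same estimator instance, fitted twice, should produce two independent models without complaint. A sketch against the current public API, with made-up in-memory data.)

    using System.Collections.Generic;
    using Microsoft.ML;
    using Microsoft.ML.Data;

    internal static class EstimatorReuseSketch
    {
        private sealed class Row
        {
            public bool Label { get; set; }
            [VectorType(2)] public float[] Features { get; set; }
        }

        public static void Run()
        {
            var mlContext = new MLContext(seed: 1);
            IDataView data = mlContext.Data.LoadFromEnumerable(new List<Row>
            {
                new Row { Label = true,  Features = new[] { 1f, 0f } },
                new Row { Label = false, Features = new[] { 0f, 1f } },
                new Row { Label = true,  Features = new[] { 1f, 1f } },
                new Row { Label = false, Features = new[] { 0f, 0f } },
            });

            // One estimator instance, fitted twice: the second call must not throw
            // "Can't re-use trainer!" and should produce an independent model.
            var trainer = mlContext.BinaryClassification.Trainers.AveragedPerceptron(numberOfIterations: 10);
            var modelA = trainer.Fit(data);
            var modelB = trainer.Fit(data);
        }
    }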

@TomFinley force-pushed the tfinley/LinearState branch from e811fb1 to 7d713a3 on October 30, 2018 at 21:05
@sfilipi (Member) left a comment

:shipit:

      var env = CatalogUtils.GetEnvironment(ctx);
    - var loss = new TrivialClassificationLossFactory(lossFunction ?? new LogLoss());
    - return new AveragedPerceptronTrainer(env, label, features, weights, loss, learningRate, decreaseLearningRate, l2RegularizerWeight, numIterations, advancedSettings);
    + return new AveragedPerceptronTrainer(env, label, features, weights, lossFunction ?? new LogLoss(), learningRate, decreaseLearningRate, l2RegularizerWeight, numIterations, advancedSettings);
@sfilipi (Member) commented on the added line, at LogLoss():

It's LogLoss here, but the default for AP is HingeLoss.
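(For reference, the suggested change would presumably just swap the default loss in the added line above — a sketch, not the committed fix.)

    // Use hinge loss as the default for averaged perceptron, matching the trainer's conventional default.
    return new AveragedPerceptronTrainer(env, label, features, weights, lossFunction ?? new HingeLoss(),
        learningRate, decreaseLearningRate, l2RegularizerWeight, numIterations, advancedSettings);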

@TomFinley force-pushed the tfinley/LinearState branch from 5ad6f06 to 9c173c8 on October 30, 2018 at 23:49
@TomFinley merged commit 696d511 into dotnet:master on Oct 31, 2018
@TomFinley deleted the tfinley/LinearState branch on November 3, 2018 at 09:10
@ghost locked as resolved and limited conversation to collaborators on Mar 27, 2022

Successfully merging this pull request may close these issues.

"Can't re-use trainer!" message on CV with AveragePerceptron.
