
Conversation

@jeremy-myers (Contributor) commented Sep 20, 2023


📚 Documentation preview 📚: https://pyttb--253.org.readthedocs.build/en/253/

@jeremy-myers jeremy-myers linked an issue Sep 20, 2023 that may be closed by this pull request
@dmdunla (Collaborator) commented Sep 26, 2023

This is still breaking due to the docs failing when the LBFGS line search fails:

Bad direction in the line search;
   refresh the lbfgs memory and restart the iteration.

I made the problem smaller and that did not help:

shape = (5, 6, 7)

I initialized with the solution, M_true, and that did not help:

# Compute rank-3 GCP approximation to X with GCP-OPT
result_lbfgs, initial_guess, info_lbfgs = ttb.gcp_opt(
    data=X, rank=rank, objective=objective, optimizer=optimizer, init=M_true
)

I created an initial guess of all ones and that did help on the smaller problem:

shape = (5, 6, 7)
...
M_true = ttb.ktensor(U).normalize()
M_init = ttb.ktensor.from_function(np.ones, shape, rank)
...
# Compute rank-3 GCP approximation to X with GCP-OPT
result_lbfgs, initial_guess, info_lbfgs = ttb.gcp_opt(
    data=X, rank=rank, objective=objective, optimizer=optimizer, init=M_init
)
...

@dmdunla (Collaborator) commented Sep 26, 2023

Why was the matplotlib code commented out? Was there a problem?

@jeremy-myers (Contributor, Author) commented

> Why was the matplotlib code commented out? Was there a problem?

Yes, some of the regression tests were failing.

@jeremy-myers (Contributor, Author) commented
> This is still breaking due to the docs failing when the LBFGS line search fails: […]

I'll incorporate this and commit again.

@dmdunla (Collaborator) commented Sep 26, 2023

@jeremy-myers Please add an issue for the Rayleigh loss failing with LBFGS when using a random start or initializing with M_true.

@dmdunla dmdunla merged commit d6cdc2f into sandialabs:main Sep 27, 2023
@jeremy-myers jeremy-myers deleted the 230-tutorial-gcp_opt branch September 28, 2023 18:26


Development

Successfully merging this pull request may close these issues.

Tutorial: gcp_opt()
