I see, this indeed explains the behavior I observed with MultiTaskGP optimization. Have you considered any ways to overcome this issue, or was it never a problem in your tests?

We're using transfer learning under the assumption that the target and source tasks are nicely correlated. If I don't know anything else about the target task and I can't assume that the tasks are well correlated, there isn't anything to transfer. So, MultiTaskGP is a bit of a gamble: when we don't have much information about the target task, we're betting that the source task will help us generate good candidates.
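For concreteness, here is a minimal sketch of that setup in BoTorch (the data and task labels below are illustrative placeholders, not from this discussion): the source and target observations are stacked with a task-index column, and a single MultiTaskGP is fit over both so that target-task predictions can borrow strength from the source task when the learned inter-task correlation is high.

```python
import torch
from botorch.models import MultiTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

# Illustrative placeholder data: many source-task observations, few target-task ones.
X_source = torch.rand(20, 2, dtype=torch.double)        # source inputs
Y_source = X_source.sum(dim=-1, keepdim=True)           # source observations
X_target = torch.rand(3, 2, dtype=torch.double)         # target inputs (scarce)
Y_target = X_target.sum(dim=-1, keepdim=True) + 0.1     # target observations

# Append a task-index column: 0 = source task, 1 = target task.
train_X = torch.cat(
    [
        torch.cat([X_source, torch.zeros(20, 1, dtype=torch.double)], dim=-1),
        torch.cat([X_target, torch.ones(3, 1, dtype=torch.double)], dim=-1),
    ]
)
train_Y = torch.cat([Y_source, Y_target])

# task_feature=-1 points at the task-index column; output_tasks=[1] makes the
# model predict only the target task while still learning from the source data.
model = MultiTaskGP(train_X, train_Y, task_feature=-1, output_tasks=[1])
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)
```

If the inter-task correlation the model learns turns out to be weak, the source data contributes little to the target-task posterior, which is exactly the gamble described above.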

But then what happens if they're actually not that related? Before generating candidates, we can evalua…
