
Conversation

@qualiaMachine (Collaborator)

Fine-tuning is a critical component of transfer learning, and I think this episode really needs to explore its impact. This added exercise has learners unfreeze one layer of the pretrained model and compare its performance to our original frozen model. We see a noticeable improvement with fine-tuning, as expected. The solution discusses the pros and cons of unfreezing layers and how to balance overfitting against training time.
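
To make the idea concrete, here is a minimal Keras sketch of the frozen-vs-fine-tuned comparison. The DenseNet121 base, the 64×64×3 input shape, the class count, the learning rate, and the choice to unfreeze the top few layers (rather than exactly one) are illustrative assumptions, not the episode's exact code.

```python
import keras

# Pretrained convolutional base without its classification head.
# DenseNet121, the input shape, and the class count are illustrative
# placeholders, not necessarily what the episode itself uses.
base_model = keras.applications.DenseNet121(
    include_top=False,
    weights="imagenet",
    input_shape=(64, 64, 3),
)

# Frozen baseline: none of the pretrained weights are updated during training.
base_model.trainable = False

# Fine-tuned variant: make the base trainable again, then re-freeze all but
# the top few layers. (Skip these lines to reproduce the frozen baseline;
# the cut-off index is an arbitrary choice for illustration.)
base_model.trainable = True
for layer in base_model.layers[:-6]:
    layer.trainable = False

# New classification head on top of the (partially) frozen base.
inputs = keras.Input(shape=(64, 64, 3))
x = base_model(inputs, training=False)  # keep batch-norm layers in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)

# A small learning rate reduces the risk of overwriting the pretrained features.
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Note that Keras only picks up `trainable` flags at compile time, so the model has to be (re)compiled after they are changed; training both variants for the same number of epochs then allows a side-by-side plot of validation accuracy like the added figure.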

github-actions bot commented May 15, 2025

Thank you!

Thank you for your pull request 😃

🤖 This automated message can help you check the rendered files in your submission for clarity. If you have any questions, please feel free to open an issue in {sandpaper}.

If you have files that automatically render output (e.g. R Markdown), then you should check for the following:

  • 🎯 correct output
  • 🖼️ correct figures
  • ❓ new warnings
  • ‼️ new errors

Rendered Changes

🔍 Inspect the changes: https://github.com/carpentries-lab/deep-learning-intro/compare/md-outputs..md-outputs-PR-584

The following changes were observed in the rendered markdown documents:

 5-transfer-learning.md               | 114 +++++++++++++++++++++++++++++++++++
 fig/05-frozen_vs_finetuned.png (new) | Bin 0 -> 112734 bytes
 md5sum.txt                           |   2 +-
 3 files changed, 115 insertions(+), 1 deletion(-)
What does this mean?

If you have source files that require output and figures to be generated (e.g. R Markdown), then it is important to make sure the generated figures and output are reproducible.

This output provides a way for you to inspect the output in a diff-friendly manner so that it's easy to see the changes that occur due to new software versions or randomisation.

⏱️ Updated at 2025-05-15 20:12:38 +0000

github-actions bot pushed a commit that referenced this pull request May 15, 2025
@carschno (Collaborator) left a comment


This looks like a very valuable addition indeed!
I have added a comment about the added figure.

@carschno (Collaborator)

The image does not look very convincing: both curves end at a similar value for validation accuracy (~0.6). This seems to require some explanation.


```
![](episodes/fig/05-frozen_vs_finetuned.png)
```
@carschno (Collaborator)

It would be good to have alt image descriptions everywhere, e.g.

Suggested change
![](episodes/fig/05-frozen_vs_finetuned.png)
![](episodes/fig/05-frozen_vs_finetuned.png){alt="A comparison of the accuracy on the validation set for both the frozen and the fine-tuned setup."}
