To train, prepare a folder that contains images sized at 512x512 and execute the following:

```
   --init_word 'cat'
```
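Only the final argument of the command appears above; a full invocation typically looks like the sketch below, modeled on the upstream textual-inversion trainer. The checkpoint path, data path, and project name are assumptions; substitute your own.

```bash
# Hypothetical example: adjust the config, checkpoint, data paths, and
# the project name (-n) to your setup.
python3 ./main.py --base ./configs/stable-diffusion/v1-finetune.yaml \
                  -t \
                  --actual_resume ./models/ldm/stable-diffusion-v1/model.ckpt \
                  -n my_cat \
                  --gpus 0, \
                  --data_root D:/textual-inversion/my_cat \
                  --init_word 'cat'
```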
During the training process, files will be created in /logs/[project][time][project]/, where you can monitor training progress.
`conditioning` contains the training prompt inputs; `reconstruction`, the input images for the training epoch samples; and `samples`, scaled samples of the prompt, plus one generated with the provided init word.
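These names are prefixes of the image files the logger writes out. A sketch of the layout, assuming the repo's default Lightning image logger; the directory and file names here are illustrative:

```
logs/my_cat2022-09-01T00-00-00_my_cat/
└── images/
    └── train/
        ├── conditioning_gs-000500_e-000001_b-000000.png    # training prompt, rendered as an image
        ├── reconstruction_gs-000500_e-000001_b-000000.png  # input images for this epoch's samples
        └── samples_gs-000500_e-000001_b-000000.png         # scaled samples of the prompt
```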
On an RTX 3090, the process for SD will take ~1h at ~1.6 iterations/sec.
_Note_: According to the associated paper, the optimal number of images is 3-5. Your model may not converge if you use more images than that.
Training will run indefinitely, but you may wish to stop it (with ctrl-c) before the heat death of the universe, when you find a low-loss epoch or around ~5000 iterations. Note that you can set a fixed limit on the number of training steps by decreasing the `max_steps` option in configs/stable_diffusion/v1-finetune.yaml (currently set to 4000000).
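That option sits in the Lightning trainer section of the config; a minimal sketch, assuming the repo's default layout (the surrounding keys are shown only for orientation):

```yaml
# configs/stable_diffusion/v1-finetune.yaml (excerpt; layout assumed)
lightning:
  trainer:
    benchmark: True
    max_steps: 4000000   # lower this to stop training after a fixed number of steps
```

As a rough guide, at ~1.6 iterations/sec a value of `max_steps: 5000` would end the run after about an hour.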
**Running**
Once the model is trained, specify the trained .pt or .bin file when starting dream.
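The exact flag may vary by version; a sketch, assuming this fork's `--embedding_path` option (verify against `dream.py --help`):

```bash
# Hypothetical invocation; the embedding path is an example.
python3 ./scripts/dream.py --embedding_path /path/to/embedding.pt --full_precision
```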