
Conversation


@scxue scxue commented Dec 18, 2023

Fix failing cases in the tests.

@lawrence-cj
Owner

We fixed some bugs that were causing the test errors.

@lawrence-cj lawrence-cj merged commit b648ea4 into lawrence-cj:feat/sa-solver Dec 18, 2023
lawrence-cj added a commit that referenced this pull request Dec 23, 2024
* init vila caption

* 111 (#2)

* Feat/enze (#3)

* 111

* 222

* test test

* update vila caption

* add vila code

* update caption code

---------

Co-authored-by: xieenze <[email protected]>
lawrence-cj added a commit that referenced this pull request Dec 23, 2024
* init vila caption

* 111 (#2)

* Feat/enze (#3)

* 111

* 222

* test test

* update vila caption

* add vila code

* update caption code

* update vila stuff

* update

---------

Co-authored-by: junsongc <[email protected]>
lawrence-cj added a commit that referenced this pull request Dec 23, 2024
* init vila caption

* 111 (#2)

* Feat/enze (#3)

* 111

* 222

* test test

* update vila caption

* add vila code

* update caption code

* update vila stuff

* update

* gemma related

* update

* add time vae

* update train

* unrelated commit

* code update

* 1. add RMSNorm code;
2. add qk norm for cross attention;
3. add RMSNorm for y_embedder;
4. code update;
5. config update for y_norm;
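As a rough illustration of the normalization work listed in this commit (plain Python, not the repository's actual module): RMSNorm rescales a vector by its root-mean-square and applies a learned per-channel gain, and applying it to queries and keys ("qk norm") before the attention dot product keeps the logits in a bounded range.

```python
import math

def rms_norm(x, weight, eps=1e-6):
    """RMSNorm: divide x by its root-mean-square, then apply a gain.

    Unlike LayerNorm there is no mean subtraction and no bias term,
    which makes it cheaper while often being just as stable.
    """
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

# With a unit gain, the output has RMS ~= 1 regardless of input scale,
# so q.k dot products cannot blow up with the hidden-state magnitude.
q = rms_norm([3.0, 4.0], [1.0, 1.0])
```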

* tmp update for train.py

* fix t5 loading

* del unrelated files

* tmp code for norm y & model

* update

* update

* revert model structure(some unrelated nn.Identity Norm)

* fix epoch_eta bug;

(cherry picked from commit 48a2c16)
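The epoch ETA logic presumably extrapolates remaining time from the mean step duration so far; a hypothetical sketch (the helper name and signature are illustrative, not the project's actual code):

```python
def epoch_eta(step, total_steps, elapsed_seconds):
    """Estimate seconds left in the epoch from the mean step time so far."""
    if step <= 0:
        return float("inf")  # no steps finished yet; ETA is undefined
    avg_step = elapsed_seconds / step
    return avg_step * (total_steps - step)

# 50 of 200 steps done in 100 s -> 2 s/step average, 150 steps remain.
eta = epoch_eta(50, 200, 100.0)
```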

* update

* add gemma config

* update

* add ldm ae

* update

* add junyu vae

* update

* get_vae code

* remove debug in train

* add config.vae_latent_dim in train.py

* commit

* tqdm optimize in [infer]

* update [infer]

* update vae store code

* update

* update

* update

* update

* update

* update

* add readme

* update

* re-add ldm_ae

* [important] fix the `glumbonv` serious bug. Change `glumbonv` to `glumbconv`;

* make the model structure code more robust.

* update

* update

* update

* update

* update

* update

* update

* 1

* set TOKENIZERS_PARALLELISM false
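Context for this entry: Hugging Face's fast (Rust-backed) tokenizers warn about, and can deadlock on, process forks that happen after tokenizer parallelism has been used, so training scripts commonly pin the variable before spawning DataLoader workers:

```python
import os

# Must run before the first fork (e.g. before DataLoader workers start);
# the tokenizers library reads this variable to disable its thread pool.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```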

* update

* update

* optimize cache log

* add parallel linear attn

* add parallel attn ref comments
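Linear attention, as referenced in the entries above, drops the softmax so the matrix product can be re-associated: contracting the keys with the values first costs O(n·d²) instead of the O(n²·d) of forming the full attention matrix. A tiny pure-Python demonstration of that associativity (illustrative only, with the feature map taken as the identity):

```python
def matmul(a, b):
    """Naive dense matrix product of two lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def transpose(m):
    return [list(col) for col in zip(*m)]

Q = [[1.0, 2.0], [0.5, 1.5]]
K = [[2.0, 0.0], [1.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0]]

# Quadratic order: build the n x n attention matrix first.
quadratic = matmul(matmul(Q, transpose(K)), V)
# Linear order: contract keys with values first (a d x d matrix), then apply Q.
linear = matmul(Q, matmul(transpose(K), V))
```

Both orderings produce the same result; only the cost differs.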

* update

* update

* update parallel attn

* update

* update

* update text encoder system prompt

* update

* add sys prompt hashid

* update

* update

* add test edit speed code

* add torch.sync code
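Background for this entry: CUDA kernels launch asynchronously, so naive wall-clock timing stops the clock before queued GPU work finishes; the standard fix is to synchronize around each clock read. A runnable CPU-only sketch of the pattern (pass `torch.cuda.synchronize` as the barrier when timing GPU code; the wiring is hypothetical, not the repository's script):

```python
import time

def timed(fn, sync=lambda: None):
    """Run fn() and measure wall-clock time, calling `sync` as a barrier
    before each clock read so asynchronous work (e.g. queued CUDA
    kernels) is fully included in the measurement."""
    sync()
    start = time.perf_counter()
    result = fn()
    sync()
    return result, time.perf_counter() - start

result, elapsed = timed(lambda: sum(range(100_000)))
```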

* add inference for qat

* add 2k config and fix dataset bug

* update

* update

* push 4k config

* add 4k timeshift=5 config

* add feature: dilate conv
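Dilated ("atrous") convolution inserts gaps between kernel taps, enlarging the receptive field to (k − 1)·dilation + 1 samples without adding parameters, which is attractive at 2K/4K resolutions. A 1-D toy version (illustrative only, unrelated to the project's actual layer):

```python
def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1-D convolution that skips `dilation` - 1 samples
    between consecutive kernel taps."""
    span = (len(kernel) - 1) * dilation  # input distance the kernel covers
    return [
        sum(kernel[j] * x[i + j * dilation] for j in range(len(kernel)))
        for i in range(len(x) - span)
    ]

# dilation=2 with a length-3 kernel reads x[i], x[i+2], x[i+4].
out = dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1, 1], dilation=2)
```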

* add flux sbatch test scripts;

* update

* update

* tmp code

* [CI-Lint] Fix code style issues with pre-commit 9fc4580380895194e461754b35cb9c904559e4e5

* clean code;
mv slurm script into a folder;

* [CI-Lint] Fix code style issues with pre-commit 9f1aeef955f2b1c23363fc7a00a9cef82bb6091f

* bug fixed caused by merging enze's code;

* mv unused model-block to other scripts;

* [CI-Lint] Fix code style issues with pre-commit de3e66f6f8df2c056571387b2ad864e528bfc926

* mv unused model-block to other scripts;

* code update;

* code update;

* [CI-Lint] Fix code style issues with pre-commit 5b2bac2e501cc6952f5c35fe4ce8fe1b98e6add8

---------

Co-authored-by: xieenze <[email protected]>
Co-authored-by: GitHub Action <[email protected]>
