## Meta Fine-tuning
A OneFlow implementation of the [Meta Fine-tuning (MFT)](https://aclanthology.org/2020.emnlp-main.250.pdf "Meta Fine-tuning (MFT)") algorithm.

---
## MFT
During the fine-tuning stage of a pre-trained language model, MFT applies ideas from meta-learning to capture meta-knowledge shared across several similar domains (or tasks) and to transfer the pre-trained language model into a common domain space. In short, MFT proceeds in three stages (a toy sketch of the first stage follows this list):
- First, obtain the prototypical embedding of each domain from the pre-trained language model and compute a prototypical score for every training sample;
- Then introduce the idea of domain-adversarial training and perform meta fine-tuning: data belonging to the same domain (task) are sampled and mixed via the N-way K-shot method, and the knowledge common to all domains (tasks) is learned;
- Finally, perform standard fine-tuning separately for each specific task in each domain.
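
For intuition, here is a minimal NumPy sketch of the first stage under one simplifying assumption (this is not the repo's exact code, which lives in preprocess.py): a domain's prototype is the mean of its samples' BERT sentence embeddings, and a sample's prototypical score is its cosine similarity to that prototype.

```python
import numpy as np

def prototypical_scores(embeddings, domain_ids):
    """Toy version of stage one: per-domain prototype = mean embedding;
    a sample's score = cosine similarity to its domain's prototype."""
    embeddings = np.asarray(embeddings, dtype=np.float32)
    domain_ids = np.asarray(domain_ids)
    scores = np.zeros(len(embeddings), dtype=np.float32)
    for d in np.unique(domain_ids):
        idx = np.where(domain_ids == d)[0]
        proto = embeddings[idx].mean(axis=0)  # domain prototype
        num = embeddings[idx] @ proto
        den = np.linalg.norm(embeddings[idx], axis=1) * np.linalg.norm(proto) + 1e-8
        scores[idx] = num / den
    return scores
```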


## Data Acquisition
This part implements the MFT algorithm with the OneFlow static-graph framework. The data come from MR, CR, and SST-2 (all binary sentiment-classification tasks), one sample per line in the following tab-separated format:
> [text]\t[domain/task name]\t[label]

For example:
> it 's a stale , overused cocktail using the same olives since 1962 as garnish . SST-2 0
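
Each line splits on the tab character; a tiny parsing helper (hypothetical, not part of the repo) might look like this:

```python
def parse_line(line):
    """Split one tab-separated sample into its three fields."""
    text, domain, label = line.rstrip("\n").split("\t")
    return text, domain, int(label)

text, domain, label = parse_line("it 's a stale , overused cocktail ...\tSST-2\t0")
# -> (..., 'SST-2', 0)
```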

## Data File Description

```shell
data
├── k-shot-cross              // a mixture of multiple domains/tasks, used for training and validation in the Meta Fine-tuning phase
│   └── g1                    // the first group of domains/tasks
│       └── 16-42             // k=16, random seed 42
│           ├── ofrecord      // ofrecord files
│           │   ├── train     // training set
│           │   │   ├── train.of_record-0  // training data
│           │   │   └── weight.npy         // prototypical scores of the training data
│           │   └── dev       // development set
│           │       └── dev.of_record-0    // development data
│           ├── train.csv     // training data
│           └── dev.csv       // development data
└── k-shot-single             // a single domain/task, used for training, validation, and testing in the standard Fine-tuning phase
    └── SST-2
        └── 16-42             // k=16, random seed 42
            ├── ofrecord      // ofrecord files
            │   ├── train     // training set
            │   │   └── train.of_record-0  // training data
            │   ├── dev       // development set
            │   │   └── dev.of_record-0    // development data
            │   └── test      // test set
            │       └── test.of_record-0   // test data
            ├── train.csv     // training data
            ├── dev.csv       // development data
            └── test.csv      // test data
```


## Experiment Settings

#### Step 1: Obtain the prototypical score of each training sample
Run preprocess.py on the pre-trained BERT model (uncased_L-12_H-768_A-12_oneflow). It extracts the last-layer hidden vectors from BERT, computes the prototypical embedding of the training set and the prototypical score of each sample, and saves the scores to disk:

```shell
python3 preprocess.py \
--task_name g1 \
--model_load_dir uncased_L-12_H-768_A-12_oneflow \
--data_dir data/k-shot-cross/g1/16-42 \
--num_epochs 4 \
--seed 42 \
--seq_length=128 \
--train_example_num 96 \
--dev_example_num 96 \
--vocab_file uncased_L-12_H-768_A-12/vocab.txt \
--resave_ofrecord
```
After execution, the ofrecord directory will be generated under the corresponding `data/k-shot-cross/g1/16-42` directory.
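
As a quick sanity check (assuming weight.npy stores one prototypical score per training sample), you can inspect the saved scores with NumPy:

```python
import numpy as np

scores = np.load("data/k-shot-cross/g1/16-42/ofrecord/train/weight.npy")
print(scores.shape)  # expected: one score per training sample, e.g. (96,)
print(scores.min(), scores.max())
```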

#### Step 2: Meta Fine-tuning

Run meta_finetuning.py to perform cross-domain/task fine-tuning on the pre-trained BERT model (uncased_L-12_H-768_A-12_oneflow):
```shell
python3 meta_finetuning.py \
--task_name g1 \
--model_load_dir uncased_L-12_H-768_A-12_oneflow \
--data_dir data/k-shot-cross/g1/16-42 \
--num_epochs 63 \
--seed 42 \
--seq_length=128 \
--train_example_num 96 \
--dev_example_num 96 \
--batch_size_per_device 4 \
--dev_batch_size_per_device 2 \
--dev_every_step_num 100 \
--vocab_file uncased_L-12_H-768_A-12/vocab.txt \
--learning_rate 5e-5 \
--resave_ofrecord
```
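
Conceptually, the prototypical scores from Step 1 act as per-sample loss weights during meta fine-tuning, so samples that are more typical of their domain contribute more to the shared meta-learner. The sketch below is illustrative NumPy only (meta_finetuning.py implements the actual objective with OneFlow ops, and the paper adds a domain-adversarial term):

```python
import numpy as np

def weighted_cross_entropy(logits, labels, proto_scores):
    """Illustrative: scale each sample's cross-entropy loss by its
    prototypical score (as loaded from weight.npy in Step 1)."""
    logits = np.asarray(logits, dtype=np.float32)
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-8)
    return float((np.asarray(proto_scores) * nll).mean())
```
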
For example, as shown in the figure below, the three domains SST-2, MR, and CR form the g1 dataset; all three are binary classification tasks. Each domain contributes 16 samples per class (3 domains × 2 classes × 16 = 96 samples), so the training set and the validation set each contain 96 samples. The best validation accuracy after meta fine-tuning is 78.125%. Fine-tuning produces a meta-learner (the OneFlow model is saved to `output/model_save-2021-06-15-08:54:42/snapshot_best_mft_model_g1_dev_0.78125`).

![Meta Fine-tuning](images/meta_fine_tuning.png)


#### Step 3: Fine-tuning

Load the model file generated in Step 2 (for example, `output/model_save-2021-06-15-08:54:42/snapshot_best_mft_model_g1_dev_0.7083333333333334`) and perform standard fine-tuning:

```shell
python3 finetuning.py \
--task_name sst-2 \
--model_load_dir output/model_save-2021-06-15-08:54:42/snapshot_best_mft_model_g1_dev_0.7083333333333334 \
--data_dir data/k-shot-single/SST-2/16-42 \
--num_epochs 64 \
--seed 42 \
--seq_length=128 \
--train_example_num 32 \
--dev_example_num 32 \
--eval_example_num 872 \
--batch_size_per_device 2 \
--dev_batch_size_per_device 2 \
--eval_batch_size_per_device 2 \
--dev_every_step_num 50 \
--vocab_file uncased_L-12_H-768_A-12/vocab.txt \
--learning_rate 1e-5 \
--resave_ofrecord
```

For example, as shown in the figure below, select the model snapshot_best_mft_model_g1_dev_0.78125 and the SST-2 dataset (32 samples in the training set, 872 samples in the test set), then perform standard fine-tuning. The best validation accuracy is 68.75%; after saving the corresponding model, the accuracy on the test set is 87.50%.
![Fine-tuning](images/fine_tuning.png)

