[QEff Finetune]: Adding steps about how to fine tune on any custom dataset. (#381)
1) Added steps on how to create custom_dataset.py to run fine-tuning
through the QEfficient pipeline on any custom dataset. Also added a
detailed template for the user that covers how to create
custom_dataset.py.
2) Added the argument 'context_length' to the existing APIs, which
enables fine-tuning with padding on a custom dataset.
3) Made alpaca_dataset the default dataset.
4) For DDP without sorting, shuffling was set to True. Made it False to
match the single-SoC run and to allow use of the 'resume
fine-tuning from between' feature.
---------
Signed-off-by: Swati Allabadi <[email protected]>
docs/source/finetune.md (+41 -1)
@@ -64,4 +64,44 @@ to visualise the data,

```python
tensorboard --logdir runs/<file> --bind_all
```
## Some features/functionalities of fine-tuning stack:

1) Gradient accumulation: By default, gradient accumulation happens for 4 steps. To update this value, the command line argument gradient_accumulation_steps has to be passed. (Example: '--gradient_accumulation_steps 8')
2) Gradient checkpointing: By default, gradient checkpointing is disabled. To enable it, the command line argument gradient_checkpointing has to be passed (see the example below).
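As an illustration, both settings could be combined in one run. The `python -m QEfficient.cloud.finetune` entry point shown here is an assumption; substitute the finetune command for your installation:

```bash
# Hypothetical invocation: accumulate gradients over 8 steps and
# enable gradient checkpointing.
python -m QEfficient.cloud.finetune \
    --gradient_accumulation_steps 8 \
    --gradient_checkpointing
```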
## Fine-Tuning on custom dataset

To run fine-tuning on any user-specific dataset, prepare the dataset using the following steps:

1) Create a directory named 'dataset' inside efficient-transformers.
2) Inside this directory, create a file named 'custom_dataset.py'.
3) Inside the newly created efficient-transformers/dataset/custom_dataset.py, define a function named 'get_custom_dataset'.
4) get_custom_dataset() should have the following 4 parameters: dataset_config, tokenizer, split, context_length.
5) Inside get_custom_dataset(), the user needs to apply the prompt and tokenize the dataset accordingly. Please refer to the template below on how to define get_custom_dataset().
6) For examples, please refer to the Python files present in [dataset](https://github.com/quic/efficient-transformers/tree/main/QEfficient/finetune/dataset). In the case of the Samsum dataset, get_preprocessed_samsum() of efficient-transformers/QEfficient/finetune/dataset/samsum_dataset.py is called.
7) In [dataset_config.py](https://github.com/quic/efficient-transformers/blob/main/QEfficient/finetune/configs/dataset_config.py), for the custom_dataset class, pass the appropriate values for train_split and test_split. Alternatively, these values can be passed as command line arguments with the finetune command, for example "--train_split train".
8) While running fine-tuning, pass the argument "--dataset custom_dataset" to fine-tune on the custom dataset; see the example command after this list.
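Putting steps 7) and 8) together, a complete launch might look like the following sketch (the `python -m QEfficient.cloud.finetune` entry point and the split names are assumptions; adjust them to your setup):

```bash
# Hypothetical invocation: select the custom dataset and pass its
# split names on the command line.
python -m QEfficient.cloud.finetune \
    --dataset custom_dataset \
    --train_split train \
    --test_split test
```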
Template for get_custom_dataset(), to be defined inside efficient-transformers/dataset/custom_dataset.py, is as follows:
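The sketch below illustrates one possible shape for that template. The dataset source (Hugging Face 'samsum'), the prompt, and the field names are illustrative assumptions, so adapt them to your own data:

```python
# efficient-transformers/dataset/custom_dataset.py (illustrative sketch)
import datasets


def get_custom_dataset(dataset_config, tokenizer, split, context_length=None):
    # 'split' receives the configured train_split or test_split value,
    # so one function serves both phases of fine-tuning.
    dataset = datasets.load_dataset("samsum", split=split)  # assumed source

    prompt = "Summarize this dialog:\n{dialog}\n---\nSummary:\n"

    def apply_prompt_template(sample):
        # Wrap each raw sample in the prompt; the field names here are
        # specific to the assumed dataset.
        return {
            "input": prompt.format(dialog=sample["dialogue"]),
            "label": sample["summary"],
        }

    def tokenize(sample):
        # When context_length is given, pad/truncate every sample to
        # that fixed length so batches share a static sequence length.
        return tokenizer(
            sample["input"] + sample["label"],
            max_length=context_length,
            padding="max_length" if context_length is not None else False,
            truncation=context_length is not None,
        )

    dataset = dataset.map(apply_prompt_template, remove_columns=list(dataset.features))
    dataset = dataset.map(tokenize, remove_columns=["input", "label"])
    return dataset
```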