From 0a27e6a00eab472ac94cd6f397ad1380cc2f52d8 Mon Sep 17 00:00:00 2001
From: Dr. Artificial曾小健 <875100501@qq.com>
Date: Mon, 5 Aug 2024 20:45:42 +0800
Subject: [PATCH] Fix typo in README.md

Fix typo: "dictonary" -> "dictionary".
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 7923e211..6907c20d 100644
--- a/README.md
+++ b/README.md
@@ -83,7 +83,7 @@ NLVR2 |
 Gradient checkpoint can also be activated in the config file to reduce GPU memory usage.
 
 ### Pre-train:
-1. Prepare training json files where each json file contains a list. Each item in the list is a dictonary with two key-value pairs: {'image': path_of_image, 'caption': text_of_image}.
+1. Prepare training json files where each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'image': path_of_image, 'caption': text_of_image}.
 2. In configs/pretrain.yaml, set 'train_file' as the paths for the json files .
 3. Pre-train the model using 8 A100 GPUs:
 python -m torch.distributed.run --nproc_per_node=8 pretrain.py --config ./configs/Pretrain.yaml --output_dir output/Pretrain
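
For context on step 1 of the pre-training instructions touched by this patch, here is a minimal sketch of writing one such training json file. It assumes only what the README excerpt states (a list of dictionaries with 'image' and 'caption' keys); the output file name, image paths, and captions are hypothetical placeholders.

```python
# Minimal sketch (not part of the patch): build a pre-training json file in the
# format described in the README, i.e. a list of dictionaries of the form
# {'image': path_of_image, 'caption': text_of_image}.
# All file names, paths, and captions below are hypothetical placeholders.
import json

samples = [
    {"image": "images/0001.jpg", "caption": "A dog running on the beach."},
    {"image": "images/0002.jpg", "caption": "Two people riding bicycles downtown."},
]

# Write the list as a single json array, as expected by the data loader.
with open("pretrain_part0.json", "w") as f:
    json.dump(samples, f)
```

The path(s) of the generated file(s) would then be listed under 'train_file' in configs/pretrain.yaml (step 2) before launching the distributed pre-training command (step 3).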