src/tutorials/beginner/02-transformers-text-classification.md
5 additions & 5 deletions
@@ -37,7 +37,7 @@ manual_seed(42)
 
 ## Basic Setup
 
-Next we will follow the tutorial and load up our dataset and tokenizer to prepocess the data.
+Next we will follow the tutorial and load up our dataset and tokenizer to preprocess the data.
 
 ### Data Preprocessing
 
@@ -161,7 +161,7 @@ Therefore we will define a `process_function` (called `train_step` below) to do
 * Perform backward pass using loss to calculate gradients for the model parameters.
 * Optimize model parameters using gradients and optimizer.
 
-Finally, we choose to return the `loss` so we can utilize it for futher processing.
+Finally, we choose to return the `loss` so we can utilize it for further processing.
 
 You will also notice that we do not update the `lr_scheduler` and `progress_bar` in `train_step`. This is because Ignite automatically takes care of it as we will see later in this tutorial.
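The `train_step` structure described above (forward pass, backward pass, optimizer step, return the loss) can be sketched as follows. This is a minimal illustration using a toy linear model and SGD, not the tutorial's actual transformer model, optimizer, or batch format, which are all stand-ins here:

```python
import torch
from torch import nn

# Toy stand-ins for the tutorial's model, optimizer, and loss (assumptions,
# not the tutorial's actual objects).
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

def train_step(engine, batch):
    model.train()
    inputs, labels = batch
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)  # forward pass
    loss.backward()                          # backward pass: compute gradients
    optimizer.step()                         # optimize parameters via gradients
    return loss.item()                       # returned for further processing

# One step on a random toy batch; the engine argument is unused in this sketch.
batch = (torch.randn(8, 4), torch.randint(0, 2, (8,)))
loss_value = train_step(None, batch)
```

Returning a plain float here is one choice; the tutorial's point is simply that whatever `train_step` returns becomes available to the engine for later processing.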
@@ -190,9 +190,9 @@ from ignite.engine import Engine
 trainer = Engine(train_step)
 ```
 
-The `lr_scheduler` we defined perviously was a handler.
+The `lr_scheduler` we defined previously was a handler.
 
-[Handlers](https://pytorch-ignite.ai/concepts/02-events-and-handlers/#handlers) can be any type of function (lambda functions, class methods, etc). On top of that, Ignite provides several built-in handlers to reduce redundant code. We attach these handlers to engine which is triggered at a specific [event](https://pytorch-ignite.ai/concepts/02-events-and-handlers/). These events can be anything like the start of an iteration or the end of an epoch. [Here](https://pytorch.org/ignite/generated/ignite.engine.events.Events.html#events) is a complete list of built-in events.
+[Handlers](https://pytorch-ignite.ai/concepts/02-events-and-handlers/#handlers) can be any type of function (lambda functions, class methods, etc.). On top of that, Ignite provides several built-in handlers to reduce redundant code. We attach these handlers to engine which is triggered at a specific [event](https://pytorch-ignite.ai/concepts/02-events-and-handlers/). These events can be anything like the start of an iteration or the end of an epoch. [Here](https://pytorch.org/ignite/generated/ignite.engine.events.Events.html#events) is a complete list of built-in events.
 
 Therefore, we will attach the `lr_scheduler` (handler) to the `trainer` (`engine`) via [`add_event_handler()`](https://pytorch.org/ignite/generated/ignite.engine.engine.Engine.html#ignite.engine.engine.Engine.add_event_handler) so it can be triggered at `Events.ITERATION_STARTED` (start of an iteration) automatically.
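The event/handler mechanism described above can be sketched with a toy engine. This is a simplified, dependency-free illustration of the pattern (not Ignite's actual implementation; the class and event names here are invented for the sketch):

```python
from collections import defaultdict

class ToyEngine:
    """Minimal illustration of the engine/event/handler pattern."""
    def __init__(self, process_function):
        self.process_function = process_function
        self._handlers = defaultdict(list)

    def add_event_handler(self, event, handler):
        # Attach a handler (any callable) to a named event.
        self._handlers[event].append(handler)

    def _fire(self, event):
        for handler in self._handlers[event]:
            handler(self)

    def run(self, data):
        for batch in data:
            self._fire("ITERATION_STARTED")    # e.g. where an lr_scheduler fires
            self.process_function(self, batch)
            self._fire("ITERATION_COMPLETED")

# Record the order in which things fire: the handler runs before each step.
calls = []
engine = ToyEngine(lambda e, batch: calls.append(("process", batch)))
engine.add_event_handler("ITERATION_STARTED", lambda e: calls.append(("scheduler",)))
engine.run([1, 2])
```

The point of the pattern is visible in `calls`: the attached handler runs automatically at the start of every iteration, which is why the tutorial never steps the `lr_scheduler` inside `train_step` itself.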
-Now we'll setup a [`EarlyStopping`](https://pytorch.org/ignite/generated/ignite.handlers.early_stopping.EarlyStopping.html#earlystopping) handler for the training process. `EarlyStopping` requires a score_function that allows the user to define whatever criteria to stop trainig. In this case, if the loss of the validation set does not decrease in 2 epochs (`patience`), the training process will stop early.
+Now we'll setup a [`EarlyStopping`](https://pytorch.org/ignite/generated/ignite.handlers.early_stopping.EarlyStopping.html#earlystopping) handler for the training process. `EarlyStopping` requires a score_function that allows the user to define whatever criteria to stop training. In this case, if the loss of the validation set does not decrease in 2 epochs (`patience`), the training process will stop early.
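Since Ignite's `EarlyStopping` maximizes the value returned by its score function, a loss-based criterion returns the negative loss. The stopping logic with `patience=2` can be sketched as a plain loop (a toy illustration of the behavior, not Ignite's implementation; the loss sequence below is made up):

```python
def score_function(val_loss):
    # EarlyStopping maximizes the score, so return the negative loss.
    return -val_loss

def should_stop_epoch(val_losses, patience=2):
    """Return the 1-based epoch at which training would stop, or None."""
    best = float("-inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses, start=1):
        score = score_function(loss)
        if score > best:
            best = score        # new best score: reset the patience counter
            bad_epochs = 0
        else:
            bad_epochs += 1     # no improvement this epoch
            if bad_epochs >= patience:
                return epoch    # patience exhausted: stop early
    return None

# Validation loss stops improving after epoch 2 (0.7), so with patience=2
# training halts at epoch 4.
stop_epoch = should_stop_epoch([0.9, 0.7, 0.8, 0.75])
```

In the tutorial itself this counting is handled entirely by the `EarlyStopping` handler attached to the evaluation engine; only the score function is user-supplied.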