docs/src/models/overview.md (+6 −6)
@@ -77,9 +77,9 @@ julia> predict(x_train)
In order to make better predictions, you'll need to provide a *loss function* to tell Flux how to objectively *evaluate* the quality of a prediction. Loss functions compute the cumulative distance between actual values and predictions.
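
As a rough illustration of such a loss (the exact definition in overview.md may differ), a mean-squared-error loss that takes the model as its first argument, matching the `train!(loss, predict, data, opt)` call further down, could look like:

```julia
using Flux, Statistics

# Hypothetical mean-squared-error loss: the average squared distance
# between the model's predictions and the target values.
loss(model, x, y) = mean(abs2.(model(x) .- y))
```
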
@@ -131,7 +131,7 @@ The first parameter is the weight and the second is the bias. Flux will adjust p
This optimiser implements the classic gradient descent strategy. Now improve the parameters of the model with a call to [`Flux.train!`](@ref) like this:
```jldoctest overview
-julia> train!(loss, parameters, data, opt)
+julia> train!(loss, predict, data, opt)
```
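
The call above assumes `data` and `opt` are already defined. A minimal sketch of that setup, using the names from the surrounding text (the tutorial's actual construction may differ):

```julia
# One (input, target) pair; `train!` iterates over this collection.
data = [(x_train, y_train)]

# The Descent optimiser mentioned in the text; on recent Flux versions
# the explicit-model `train!` may expect `Flux.setup(Descent(), predict)`.
opt = Descent()
```
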
And check the loss:
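
The diff does not show the block that follows, but with the model-first loss signature it would presumably be a call along these lines (the exact value printed will vary):

```julia
julia> loss(predict, x_train, y_train)  # should be lower than before the training step
```
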
@@ -156,10 +156,10 @@ In the previous section, we made a single call to `train!` which iterates over t
@@ -188,7 +188,7 @@ First, we gathered real-world data into the variables `x_train`, `y_train`, `x_t
Then, we built a single input, single output predictive model, `predict = Dense(1 => 1)`. The initial predictions weren't accurate, because we had not trained the model yet.
-After building the model, we trained it with `train!(loss, parameters, data, opt)`. The loss function is first, followed by the `parameters` holding the weights and biases of the model, the training data, and the `Descent` optimizer provided by Flux. We ran the training step once, and observed that the parameters changed and the loss went down. Then, we ran the `train!` many times to finish the training process.
+After building the model, we trained it with `train!(loss, predict, data, opt)`. The loss function is first, followed by the model itself, the training data, and the `Descent` optimizer provided by Flux. We ran the training step once, and observed that the parameters changed and the loss went down. Then, we ran the `train!` many times to finish the training process.
After we trained the model, we checked it against the test data to verify the results.
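
Pulling the recap together, here is a self-contained sketch of the whole workflow under the explicit-model `train!` signature this commit switches to; the data values and the 200-epoch loop are illustrative assumptions, not the tutorial's exact numbers:

```julia
using Flux, Statistics

# Hypothetical 1-D data standing in for the tutorial's x_train/y_train.
x_train, x_test = hcat(0:5...), hcat(6:10...)
y_train, y_test = 4 .* x_train .+ 2, 4 .* x_test .+ 2

predict = Dense(1 => 1)                         # single input, single output
loss(model, x, y) = mean(abs2.(model(x) .- y))  # mean squared error

data = [(x_train, y_train)]
opt = Descent()        # some Flux versions prefer Flux.setup(Descent(), predict)

for epoch in 1:200     # repeat the training step many times
    train!(loss, predict, data, opt)
end

loss(predict, x_test, y_test)  # check the trained model on the held-out test data
```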