
Commit cd33b1a

let's replace explicit printing with the @showprogress macro; it's pretty and doesn't waste lines
1 parent c0994c7
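
For context, ProgressMeter's @showprogress macro wraps a for loop and renders a single self-updating progress bar instead of printing a new line per report. A minimal sketch of the idea (the loop body here is just a stand-in, not the quickstart's training step):

```julia
using ProgressMeter

# One updating progress bar for the whole loop, rather than println per iteration:
@showprogress for epoch in 1:1_000
    sleep(0.001)  # stand-in for the real per-epoch work
end
```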

1 file changed: +10 -2 lines changed


docs/src/models/quickstart.md (+10 -2)
@@ -6,7 +6,7 @@ If you haven't, then you might prefer the [Fitting a Straight Line](overview.md)
 
 ```julia
 # With Julia 1.7+, this will prompt if neccessary to install everything, including CUDA:
-using Flux, Statistics
+using Flux, Statistics, ProgressMeter
 
 # Generate some data for the XOR problem: vectors of length 2, as columns of a matrix:
 noisy = rand(Float32, 2, 1000)  # 2×1000 Matrix{Float32}
@@ -32,7 +32,7 @@ opt = Flux.Adam(0.01)  # will store optimiser momentum, etc.
 
 # Training loop, using the whole data set 1000 times:
 losses = []
-for epoch in 1:1_000
+@showprogress for epoch in 1:1_000
     for (x, y) in loader
         loss, grad = Flux.withgradient(pars) do
             # Evaluate model and loss inside gradient context:
@@ -63,6 +63,14 @@ p_done = scatter(noisy[1,:], noisy[2,:], zcolor=out2[1,:], title="Trained networ
 plot(p_true, p_raw, p_done, layout=(1,3), size=(1000,330))
 ```
 
+Here's the loss during training:
+
+```julia
+plot(losses; xaxis=(:log10, "iteration"), yaxis="loss", label="per batch")
+n = length(loader)
+plot!(n:n:length(losses), mean.(Iterators.partition(losses, n)), label="epoch mean")
+```
+
 This XOR ("exclusive or") problem is a variant of the famous one which drove Minsky and Papert to invent deep neural networks in 1969. For small values of "deep" -- this has one hidden layer, while earlier perceptrons had none. (What they call a hidden layer, Flux calls the output of the first layer, `model[1](noisy)`.)
 
 Since then things have developed a little.
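
The added plotting lines aggregate the per-batch losses into per-epoch means by partitioning the loss vector into chunks of length(loader). A small self-contained sketch with made-up numbers (3 epochs of 4 batches each), not taken from the quickstart itself:

```julia
using Statistics

# Hypothetical per-batch losses: 3 epochs × 4 batches each.
losses = [1.0, 0.9, 0.8, 0.7,  0.6, 0.5, 0.5, 0.4,  0.3, 0.3, 0.2, 0.2]
n = 4  # batches per epoch, i.e. length(loader)

# The mean of each length-n chunk gives one value per epoch:
epoch_means = mean.(Iterators.partition(losses, n))
# => [0.85, 0.5, 0.25]
```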
