
Commit e0ed181

Remove tutorials (#156)
* Delete tutorialposts directory
* Delete getting_started.md
* Update navbar.html
* restore files with external links
* rm Discourse + Stack Overflow, add Ecosystem
* Update _layout/navbar.html
* {{redirect}}
* {{redirect}} 2
* {{redirect}} 3
* refresh
1 parent 3469eb6 commit e0ed181

9 files changed: +32 −1956 lines changed

_layout/navbar.html

+20 −98
@@ -1,117 +1,39 @@
 <!-- Navbar -->
 <nav class="navbar navbar-expand-lg navbar-dark container lighter">
-{{ispage index || ispage 404}}
-<a class="navbar-brand" href="./">
+<a class="navbar-brand" href="/">
 <div class="logo" style="font-size:30pt;margin-top:-15px;margin-bottom:-10px;">flux</div>
 </a>
-{{end}}
-
-{{ispage blogposts/* || ispage tutorialposts/*}}
-<a class="navbar-brand" href="../../">
-<div class="logo" style="font-size:30pt;margin-top:-15px;margin-bottom:-10px;">flux</div>
-</a>
-{{end}}
-
-{{ispage getting_started || ispage blog || ispage governance || ispage gsoc || ispage tutorials}}
-<a class="navbar-brand" href="../">
-<div class="logo" style="font-size:30pt;margin-top:-15px;margin-bottom:-10px;">flux</div>
-</a>
-{{end}}
 <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
 <span class="navbar-toggler-icon"></span>
 </button>

 <div class="collapse navbar-collapse" id="navbarSupportedContent">
 <ul class="navbar-nav mr-auto">
-{{ispage index || ispage 404}}
-<li class="nav-item">
-<a class="nav-link" href="./getting_started/">Getting Started</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="https://fluxml.ai/Flux.jl/" target="_blank">Docs</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="./blog/">Blog</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="./tutorials/">Tutorials</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="https://fluxml.ai/Flux.jl/dev/ecosystem/">Ecosystem</a>
-</li>
-<!-- <li class="nav-item">
-<a class="nav-link" href="./gsoc/">GSoC</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="./gsod/">GSoD</a>
-</li>
-<li>
-<a class="nav-link" href="./governance/">Governance</a>
-</li> -->
-{{end}}
-
-{{ispage blogposts/* || ispage tutorialposts/*}}
-<li class="nav-item">
-<a class="nav-link" href="../../getting_started/">Getting Started</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="https://fluxml.ai/Flux.jl/" target="_blank">Docs</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="../../blog/">Blog</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="../../tutorials/">Tutorials</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="https://fluxml.ai/Flux.jl/dev/ecosystem/">Ecosystem</a>
-</li>
-<!-- <li class="nav-item">
-<a class="nav-link" href="../../gsoc/">GSoC</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="../../gsod/">GSoD</a>
-</li>
-<li>
-<a class="nav-link" href="../../governance/">Governance</a>
-</li> -->
-{{end}}
-
-{{ispage getting_started || ispage blog || ispage governance || ispage gsoc || ispage tutorials}}
-<li class="nav-item">
-<a class="nav-link" href="../getting_started/">Getting Started</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="https://fluxml.ai/Flux.jl/" target="_blank">Docs</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="../blog/">Blog</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="../tutorials/">Tutorials</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="https://fluxml.ai/Flux.jl/dev/ecosystem/">Ecosystem</a>
-</li>
-<!-- <li class="nav-item">
-<a class="nav-link" href="../gsoc/">GSoC</a>
-</li>
-<li class="nav-item">
-<a class="nav-link" href="../gsod/">GSoD</a>
-</li>
-<li>
-<a class="nav-link" href="../governance/">Governance</a>
-</li> -->
-{{end}}
-
 <li class="nav-item">
-<a class="nav-link" href="https://discourse.julialang.org/c/domain/ML" target="_blank">Discuss</a>
+<a class="nav-link" href="https://fluxml.ai/Flux.jl/" target="_blank">Documentation</a>
+</li>
+<li class="nav-item">
+<a class="nav-link" href="https://github.com/FluxML/model-zoo/" target="_blank">Model Zoo</a>
 </li>
 <li class="nav-item">
 <a class="nav-link" href="https://github.com/FluxML/Flux.jl" target="_blank">GitHub</a>
 </li>
 <li class="nav-item">
-<a class="nav-link" href="https://stackoverflow.com/questions/tagged/flux.jl" target="_blank">Stack Overflow</a>
+<a class="nav-link" href="https://fluxml.ai/Flux.jl/stable/ecosystem/">Ecosystem</a>
+</li>
+<!--
+<li class="nav-item">
+<a class="nav-link" href="/gsoc/">GSoC</a>
+</li>
+<li class="nav-item">
+<a class="nav-link" href="/gsod/">GSoD</a>
+</li>
+-->
+<li class="nav-item">
+<a class="nav-link" href="/blog/">Blog</a>
+</li>
+<li>
+<a class="nav-link" href="/governance/">Governance</a>
 </li>
 <li class="nav-item">
 <a class="nav-link" href="https://github.com/FluxML/Flux.jl/blob/master/CONTRIBUTING.md" target="_blank">Contribute</a>
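The shape of this hunk is easier to see side by side: the deleted markup kept three near-identical copies of the navigation links, using `{{ispage …}}` blocks (Franklin.jl template syntax) to pick a relative prefix — `./`, `../`, or `../../` — matching the current page's depth, while the new markup keeps a single copy with root-absolute paths. A minimal sketch of the pattern, drawn from the diff above for illustration:

```html
<!-- Before: one branch per page depth, selected with Franklin's {{ispage}} -->
{{ispage blogposts/*}}
<a class="nav-link" href="../../blog/">Blog</a>
{{end}}

<!-- After: a root-absolute href resolves the same way from any depth -->
<a class="nav-link" href="/blog/">Blog</a>
```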

getting_started.md

+1 −210
@@ -1,213 +1,4 @@
 +++
 title = "Getting Started"
+external = "http://fluxml.ai/Flux.jl/stable/models/overview/"
 +++
-
-Welcome! This section contains information on how to create your first machine learning model using Flux.
-
-Flux is a 100% pure-Julia stack and provides lightweight abstractions on top of Julia's native GPU and AD support. It makes the easy things easy while remaining fully hackable. Flux also has a next-generation Automatic Differentiation (AD) system, [Zygote](https://github.com/FluxML/Zygote.jl).
-
-
-## Before you start
-
-Before you begin using Flux, you need to install Julia version 1.3 or later. For more information on installing Julia, see [Download Julia](https://julialang.org/downloads/).
-
-After installing Julia, you can install Flux by running the following command in the Julia REPL:
-
-```julia
-julia> ] add Flux
-```
-
-Alternatively, you can run the following:
-
-```julia
-julia> using Pkg; Pkg.add("Flux")
-```
-
-Flux provides GPU support. For more information on obtaining GPU support, see [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) and the [Flux documentation on GPU support](https://fluxml.ai/Flux.jl/stable/gpu/).
-
-## Getting Help
-
-If you run into any issues on your journey learning Flux.jl, please post on Stack Overflow under the [Flux.jl tag](https://stackoverflow.com/questions/tagged/flux.jl) or ask a question on the [Julia Discourse under the Machine Learning domain](https://discourse.julialang.org/c/domain/ml/).
-
-## Create your first model
-
-In this tutorial, you'll create your first machine learning model using Flux. This is a simple linear regression model that attempts to recover a linear function by looking at noisy examples.
-
-### Step 1: Import Flux
-
-To import Flux, add the following:
-
-```julia
-using Flux
-```
-
-### Step 2: Create the training data
-First, we'll write a function that generates our "true" data. We'll use Flux to recover `W_truth` and `b_truth` by looking only at examples of the `ground_truth` function.
-
-```julia
-W_truth = [1 2 3 4 5;
-           5 4 3 2 1]
-b_truth = [-1.0; -2.0]
-ground_truth(x) = W_truth*x .+ b_truth
-```
-
-Next, we generate our training data by passing random vectors into the ground truth function. We'll also add Gaussian noise using `randn()` so that it's not *too* easy for Flux to figure out the model.
-
-```julia
-x_train = [ 5 .* rand(5) for _ in 1:10_000 ]
-y_train = [ ground_truth(x) + 0.2 .* randn(2) for x in x_train ]
-```
-
-There are two important things to note in this example which differ from real
-machine learning problems:
-- Our variables are individual vectors, stored inside another vector. Usually,
-  we would have a collection of N-dimensional arrays (N >= 2) as our data.
-- In a real learning scenario, we would not have access to our ground truth,
-  only the training examples.
-
-### Step 3: Define your model
-
-Next, we define the model we want to use to learn the data. We'll use the same form that we used for our training data:
-
-```julia
-model(x) = W*x .+ b
-```
-
-We need to set the parameters of the model (`W` and `b`) to some initial values. It's fairly common to use random values, so we'll do that:
-
-```julia
-W = rand(2, 5)
-b = rand(2)
-```
-
-You can learn more about defining models in this video:
-
-~~~
-<div style="display: flex; justify-content: center;">
-<iframe style="width: 60%; height:400px;" src="https://www.youtube.com/embed/XrAUGRX998E" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
-</div>
-~~~
-
-### Step 4: Define a loss function
-
-A loss function evaluates a machine learning model's performance. In other words, it measures how far the model is from its target prediction. Flux lets you define your own custom loss function, or you can use one of the [Loss Functions](https://fluxml.ai/Flux.jl/stable/training/training/#Loss-Functions-1) that Flux provides.
-
-For this example, we'll define a loss function that measures the squared distance from the predicted output to the actual output:
-
-```julia
-function loss(x, y)
-    ŷ = model(x)
-    sum((y .- ŷ).^2)
-end
-```
-
-### Step 5: Set an optimiser
-
-You train a machine learning model by running an optimization algorithm (optimiser) that finds the best parameters (`W` and `b`). The best parameters for a model are the ones that achieve the best score of the `loss` function. Flux provides [Optimisers](https://fluxml.ai/Flux.jl/stable/training/optimisers/) that you can use to train a model.
-
-For this tutorial, we'll use a classic gradient descent optimiser with learning rate η = 0.01:
-
-```julia
-opt = Descent(0.01)
-```
-
-### Step 6: Train your model
-
-Training a model is the process of computing the gradients with respect to the parameters for each input in the data. At every step, the optimiser updates all of the parameters until it finds a good value for them. This process can be written as a loop: we iterate over the examples in `x_train` and `y_train` and update the model for each example.
-
-To indicate that we want all derivatives of `W` and `b`, we write `ps = Flux.params(W, b)`. This is a convenience function that Flux provides so that we don't have to explicitly list every gradient we want. Check out the section on [Taking Gradients](https://fluxml.ai/Flux.jl/stable/models/basics/#Taking-Gradients) if you want to learn more about how this works.
-
-We can now execute the training procedure for our model:
-
-```julia
-train_data = zip(x_train, y_train)
-ps = Flux.params(W, b)
-
-for (x,y) in train_data
-    gs = Flux.gradient(ps) do
-        loss(x,y)
-    end
-    Flux.Optimise.update!(opt, ps, gs)
-end
-```
-
-> **Note:** With this pattern, it is easy to add more complex learning routines that make use of control flow, distributed compute, scheduling optimisations, etc. Note that the pattern above is a simple Julia *for loop* but it could also be replaced with a *while loop*.
-
-While writing your own loop is powerful, sometimes you just want to do the simple thing without writing too much code. Flux lets you do this with [Flux.train!](https://fluxml.ai/Flux.jl/stable/training/training/#Training-1), which runs one training epoch over a dataset. `Flux.train!` computes gradients and updates model parameters for every sample or batch of samples. In our case, we could have replaced the above loop with the following statement:
-
-```julia
-Flux.train!(loss, Flux.params(W, b), train_data, opt)
-```
-
-For more ways to train a model in Flux, see [Training](https://fluxml.ai/Flux.jl/stable/training/training/#Training-1).
-
-### Step 7: Examine the Results
-
-The training loop we ran modified `W` and `b` to be closer to the values used to generate the training data (`W_truth` and `b_truth`). We can see how well we did by printing out the difference between the learned and actual matrices.
-
-```julia
-@show W
-@show maximum(abs, W .- W_truth)
-```
-
-Because the data and initialization are random, your results may vary slightly, but in most cases the largest difference between the elements of the learned and actual `W` matrices is no more than 4%.
-
-### Step 8: Run the script
-
-Finally, put the code above in a file with the `.jl` extension and run it as `julia name-of-your-file.jl`. You can use the [Julia VSCode extension](https://www.julia-vscode.org/) to edit and run Julia code. Alternatively, you can run Julia code in a Jupyter notebook (see [IJulia](https://github.com/JuliaLang/IJulia.jl)). Here is the full version of the code:
-
-```julia
-using Flux
-
-# Define the ground truth model. We aim to recover W_truth and b_truth using
-# only examples of ground_truth()
-W_truth = [1 2 3 4 5;
-           5 4 3 2 1]
-b_truth = [-1.0; -2.0]
-ground_truth(x) = W_truth*x .+ b_truth
-
-# Generate the ground truth training data as vectors-of-vectors
-x_train = [ 5 .* rand(5) for _ in 1:10_000 ]
-y_train = [ ground_truth(x) + 0.2 .* randn(2) for x in x_train ]
-
-# Define and initialize the model we want to train
-model(x) = W*x .+ b
-W = rand(2, 5)
-b = rand(2)
-
-# Define pieces we need to train: loss function, optimiser, examples, and params
-function loss(x, y)
-    ŷ = model(x)
-    sum((y .- ŷ).^2)
-end
-opt = Descent(0.01)
-train_data = zip(x_train, y_train)
-ps = Flux.params(W, b)
-
-# Execute a training epoch
-for (x,y) in train_data
-    gs = gradient(ps) do
-        loss(x,y)
-    end
-    Flux.Optimise.update!(opt, ps, gs)
-end
-
-# An alternate way to execute a training epoch
-# Flux.train!(loss, Flux.params(W, b), train_data, opt)
-
-# Print out how well we did
-@show W
-@show maximum(abs, W .- W_truth)
-```
-
-
-## What's next
-
-Congratulations! You have created and trained a model using Flux. Now you can continue exploring Flux's capabilities:
-
-* The [60-minute blitz tutorial](tutorials/2020/09/15/deep-learning-flux.html) is a quick intro to Flux loosely based on [PyTorch's tutorial](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html).
-* The [Flux Model Zoo](https://github.com/FluxML/model-zoo) contains various demonstrations of Flux.
-* [JuliaAcademy](https://juliaacademy.com/) offers introductory courses to Julia and Flux.
-* [Flux's official documentation](https://fluxml.ai/Flux.jl/stable/).
-
-As you continue to progress through your Flux and Julia journey, please feel free to share it on [Twitter and tag us](https://twitter.com/FluxML); we would love to see what awesome things the #FluxML community is up to.
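The commit message references a `{{redirect}}` helper, and the new front matter above points this page at an external URL instead of local content. The site is built with Franklin.jl (the `{{ispage …}}` blocks in the navbar are Franklin syntax), where such helpers are conventionally defined as `hfun_*` functions in `utils.jl` and invoked from a layout as `{{redirect}}`. Below is a minimal sketch of how such a helper could read the `external` page variable introduced in this diff; the function body is an illustrative assumption, not the code this commit actually added:

```julia
# Hypothetical sketch of a Franklin.jl redirect helper; the actual
# implementation in this repo may differ. `locvar` reads a page variable
# set in the front matter, e.g. external = "http://fluxml.ai/...".
function hfun_redirect()
    url = locvar("external")
    isnothing(url) && return ""   # page has no external target; emit nothing
    # A meta-refresh sends the browser straight to the new location,
    # with a plain link as a fallback.
    return """
    <meta http-equiv="refresh" content="0; url=$url">
    <p>This page has moved to <a href="$url">$url</a>.</p>
    """
end
```

With a helper along these lines, any page whose front matter sets `external` (like `getting_started.md` here) would render as a redirect stub rather than duplicating content that now lives in the Flux.jl documentation.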
