Commit d074798: More on backprop
1 parent b6cc353

File tree: 3 files changed, +174 -27 lines

02. Regression/7. R Squared.ipynb (+17, -14)
@@ -4,7 +4,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "# $R^2$ Intuition"
+ "## $R^2$ Intuition"
  ]
  },
  {
@@ -20,16 +20,16 @@
  "R Squared is defined as\n",
  "$$ R^2 = 1 - \\frac{SS_{res}}{SS_{tot}} $$\n",
  "\n",
- "#### So the $R^2$ basically depicts how different your model is from average model, if your model is equal to average model, the $R^2$ is 0 which is bad, but if it is accurate one, the $SS_{res}$ will be lower and $\\frac{SS_{res}}{SS_{tot}}$ will be lower, which means the $R^2$ will be higher for an accurate model \n",
+ "__So $R^2$ basically depicts how different your model is from the average model. If your model is equal to the average model, $R^2$ is 0, which is bad; if it is an accurate one, $SS_{res}$ will be lower, so $\\frac{SS_{res}}{SS_{tot}}$ will be lower, which means $R^2$ will be higher for an accurate model.__\n",
  "\n",
- "#### Note that $R^2$ can also be negative. This occurs when your model is even worse than the average model"
+ "__Note that $R^2$ can also be negative. This occurs when your model is even worse than the average model.__"
  ]
  },
  {
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "# Adjusted $R^2$\n",
+ "## Adjusted $R^2$\n",
  "\n",
  "### Problem with $R^2$\n",
  "\n",
@@ -54,26 +54,29 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "# Interpreting coefficients\n",
+ "## Interpreting coefficients\n",
  "\n",
  "Just because the coefficient of a variable is high, it doesn't mean it is more correlated. We should look at the units while interpreting a coefficient. The best way to do it is to look at the change in output for a unit change in the variable. For instance, if the coefficient is 0.79 we can say that for a unit change, i.e. for an additional dollar added into the column, the profit will increase by 79 cents"
  ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": []
  }
  ],
  "metadata": {
  "kernelspec": {
  "display_name": "Python [conda root]",
  "language": "python",
  "name": "conda-root-py"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 2
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython2",
+ "version": "2.7.12"
  }
  },
  "nbformat": 4,

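Both R Squared notebooks in this commit lean on the same two formulas, so here is a minimal NumPy sketch of them (an illustration added for this write-up, not code from the commit; the data values and variable names are invented):

```python
import numpy as np

# Made-up targets and predictions for a toy regression.
y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
y_hat = np.array([2.8, 5.3, 6.9, 9.4, 10.6])

ss_res = np.sum((y - y_hat) ** 2)        # sum of squared residuals
ss_tot = np.sum((y - y.mean()) ** 2)     # "average model" sum of squares
r2 = 1 - ss_res / ss_tot                 # R^2 = 1 - SS_res / SS_tot

n, p = len(y), 1                         # sample size and number of regressors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(round(r2, 4), round(adj_r2, 4))
```

Predicting the mean everywhere gives $R^2 = 0$, and doing worse than the mean pushes it negative, which matches the note in the markdown cell.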
05. Model Evaluation/R Squared.ipynb (+81)
@@ -0,0 +1,81 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# $R^2$ Intuition"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For a given model, the sum of squared errors is calculated as\n",
+ "$$ SS_{res} = \\sum_{i=0}^n (y_i - \\hat{y_i})^2 $$\n",
+ "\n",
+ "For a model whose output is always the average value of $y$, the total sum of squares is\n",
+ "$$ SS_{tot} = \\sum_{i=0}^n (y_i - y_{avg})^2 $$\n",
+ "\n",
+ "R Squared is defined as\n",
+ "$$ R^2 = 1 - \\frac{SS_{res}}{SS_{tot}} $$\n",
+ "\n",
+ "#### So $R^2$ basically depicts how different your model is from the average model. If your model is equal to the average model, $R^2$ is 0, which is bad; if it is an accurate one, $SS_{res}$ will be lower, so $\\frac{SS_{res}}{SS_{tot}}$ will be lower, which means $R^2$ will be higher for an accurate model. \n",
+ "\n",
+ "#### Note that $R^2$ can also be negative. This occurs when your model is even worse than the average model"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Adjusted $R^2$\n",
+ "\n",
+ "### Problem with $R^2$\n",
+ "\n",
+ "Hypothesis: $R^2$ will never decrease\n",
+ "\n",
+ "When you have a model with $p$ variables, the model will try to minimise the error. When you add the $(p + 1)$th variable, the model will try to minimise the error by assigning it a valid coefficient. If it fails to do so, i.e. if the new variable isn't helping at all, it will simply assign it a coefficient of 0. Hence, $R^2$ will never decrease. \n",
+ "\n",
+ "##### So, the problem is we will never know whether the model is actually getting better by adding additional variables, which is an important thing to know.\n",
+ "\n",
+ "So, the solution is to use adjusted $R^2$, which is given by\n",
+ "\n",
+ "$$ R^2_{adj} = 1 - (1 - R^2) \\frac{n - 1}{n - p - 1}$$\n",
+ "\n",
+ "Where,\n",
+ "p = number of regressors (independent variables)\n",
+ "n = sample size\n",
+ "\n",
+ "So basically, it penalizes you for the number of variables you use. It is a battle between the increase in $R^2$ and the penalization brought by adding the additional variable"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Interpreting coefficients\n",
+ "\n",
+ "Just because the coefficient of a variable is high, it doesn't mean it is more correlated. We should look at the units while interpreting a coefficient. The best way to do it is to look at the change in output for a unit change in the variable. For instance, if the coefficient is 0.79 we can say that for a unit change, i.e. for an additional dollar added into the column, the profit will increase by 79 cents"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [conda root]",
+ "language": "python",
+ "name": "conda-root-py"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+ }
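The "Problem with $R^2$" cell above claims that $R^2$ will never decrease when a variable is added. A small sketch that checks this numerically, under stated assumptions (synthetic data, a plain least-squares fit via `numpy.linalg.lstsq`; none of this code is in the commit):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
x1 = rng.normal(size=n)
y = 3.0 * x1 + rng.normal(scale=0.5, size=n)      # y really depends only on x1

def r2_and_adjusted(features):
    X = np.column_stack([np.ones(n)] + features)  # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ beta
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    p = len(features)                              # number of regressors
    return r2, 1 - (1 - r2) * (n - 1) / (n - p - 1)

junk = rng.normal(size=n)                  # a variable unrelated to y
print(r2_and_adjusted([x1]))               # baseline model
print(r2_and_adjusted([x1, junk]))         # in-sample R^2 cannot drop; adjusted R^2 adds a penalty
```

In-sample $R^2$ can only stay the same or creep up when the junk column is added, while the $\frac{n-1}{n-p-1}$ factor charges for the extra regressor.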

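And for the "Interpreting coefficients" cell, a tiny hedged example of the unit-change reading (the spend/profit numbers are invented, not from the repository):

```python
import numpy as np

# Hypothetical columns: marketing spend in dollars vs. profit in dollars.
spend  = np.array([100., 150., 200., 250., 300., 350.])
profit = np.array([ 82., 118., 165., 198., 240., 273.])

slope, intercept = np.polyfit(spend, profit, 1)   # simple one-variable fit
print(round(slope, 2))
# A slope of roughly 0.79 would read as: each additional dollar of spend
# is associated with about 79 cents of additional profit.
```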
08. Neural Networks/2A. Backpropagation .ipynb (+76, -13)
@@ -4,7 +4,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "## Learning in Neural Networks\n",
+ "## Backpropagation for humans\n",
  "\n",
  "\n",
  "This is probably the least understood algorithm in Machine Learning but is extremely intuitive. In this post we'll explore how to mathematically derive backpropagation and get an intuition for how it works."
@@ -18,9 +18,9 @@
  "The learning process is simply adjusting the weights and biases, that's it! The Neural Network does this by a process called Backpropagation. The steps are as follows:\n",
  "1. Randomly initialise weights\n",
  "2. __Forward Pass__: Predict a value using an activation function. \n",
- "2. See how bad you're performing using loss function. \n",
- "3. __Backward Pass__: Backpropagate the error. That is, tell your network that it's wrong, and also tell what direction it's supposed to go in order to reduce the error. This step updates the weights (here's where the network learns!)\n",
- "4. Repeat steps 2 & 3 until the error is reasonably small or for a specified number of iterations. \n",
+ "3. See how bad you're performing using a loss function. \n",
+ "4. __Backward Pass__: Backpropagate the error. That is, tell your network that it's wrong, and also tell it what direction it's supposed to go in order to reduce the error. This step updates the weights (here's where the network learns!)\n",
+ "5. Repeat steps 2-4 until the error is reasonably small or for a specified number of iterations. \n",
  "\n",
  "Step 4 is the most important step. We'll mathematically derive the equation for updating the values. \n",
  "\n",
@@ -73,7 +73,7 @@
  "\\end{bmatrix}\n",
  "$$\n",
  "\n",
- "And second level weights as:\n",
+ "And second layer weights as:\n",
  "$$\n",
  "\\theta_2 = \n",
  "\\begin{bmatrix}\n",
@@ -86,12 +86,12 @@
  "$$ z_1^{\\left(2\\right)}=\\theta_{10}^{\\left(1\\right)}+\\theta_{11}^{\\left(1\\right)}x_1+\\theta_{12}^{\\left(1\\right)}x_2 + ....\\text{for all the $z$s}$$\n",
  "\n",
  "All we do is:\n",
- "$$ \\tag 3 z^{\\left(2\\right)}=\\theta^{\\left(2\\right)}\\cdot X $$\n",
+ "$$ \\tag 3 z^{\\left(2\\right)}=\\theta^{\\left(1\\right)}\\cdot X^T $$\n",
  "\n",
  "And the activity at the second layer is thus\n",
  "$$ \\tag 4 a^{\\left(2\\right)}=\\sigma\\left(z^{\\left(2\\right)}\\right) $$\n",
  "Which is the same as:\n",
- "$$ \\tag 5 a^{\\left(2\\right)}=\\sigma\\left(\\theta^{\\left(2\\right)}\\cdot X\\right) $$\n",
+ "$$ \\tag 5 a^{\\left(2\\right)}=\\sigma\\left(\\theta^{\\left(1\\right)}\\cdot X^T\\right) $$\n",
  "\n",
  "Repeating the same step for the third layer will give us the output. \n",
  "$$ \\tag 6 z^{\\left(3\\right)}=\\theta^{\\left(2\\right)}\\cdot a^{\\left(2\\right)} $$\n",
@@ -105,7 +105,7 @@
  "source": [
  "## Forward Pass\n",
  "\n",
- "Let's take an example of a Neural Network to solve the MNIST character recognition problem. Every image is 20x20 pixel in dimension, hence the a single input will (20x20) 400 features. Remember, that the input is the first layer, so the number of neurons in the first layer will be 400. The second layer will be the hidden layer, let's say that the number of neurons in the hidden layers is 25. And since we're predicting whether the image is a number from 0-9 there are 10 discrete outputs, hence the output layer will have 10 neurons. Each of the neuron in output layer will predict a value between 0 and 1. Since these values as probabilities, the value that has the highest probability will be the winner. \n",
+ "Let's take an example of a Neural Network that solves the MNIST character recognition problem. Every image is 20x20 pixels in dimension, hence a single input will have (20x20) 400 features. Remember that the input is the first layer, so the number of neurons in the first layer will be 400. The second layer will be the hidden layer; let's say that the number of neurons in the hidden layer is 25. And since we're predicting which digit from 0-9 the image is, there are 10 discrete outputs, hence the output layer will have 10 neurons. Each of the neurons in the output layer will predict a value between 0 and 1. Since these values are probabilities, the value that has the highest probability will be the winner. \n",
  "\n",
  "#### Dimension of (input) X = (5000, 400) \n",
  "\n",
@@ -192,7 +192,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "### Easier part $\\frac{\\partial J}{\\partial \\theta^{\\left(2\\right)}}$\n",
+ "## Easier part $\\frac{\\partial J}{\\partial \\theta^{\\left(2\\right)}}$\n",
  "\n",
  "Calculating $\\frac{\\partial J}{\\partial \\theta^{\\left(2\\right)}}$ is easier than calculating $\\frac{\\partial J}{\\partial \\theta^{\\left(1\\right)}}$ so we'll start with that first. We'll go step by step and try to understand what each step is accomplishing. \n",
  "\n"
@@ -219,7 +219,9 @@
  "\\frac{\\partial J}{\\partial W^{\\left(2\\right)}} &= \\frac{\\partial\\frac{1}{2}\\left(y-\\hat y\\right)^2}{\\partial W^{\\left(2\\right)}} \\\\\n",
  "\\notag\n",
  "&= (y-\\hat y)\\cdot\\left(-\\frac{\\partial \\hat y}{\\partial W^{\\left(2\\right)}}\\right)\n",
- "\\end{align} $$\n",
+ "\\end{align} \n",
+ "$$\n",
+ "\n",
  "We have to differentiate $\\hat y$ further, respecting the [Chain Rule](https://www.youtube.com/watch?v=6kScLENCXLg). The minus sign in the second term comes from differentiating $-\\hat y$\n",
  "\n",
  "Using Equations (7) and (8) we have, \n",
@@ -241,16 +243,77 @@
  "In the last part of the equation we'll be differentiating $W^{\\left(2\\right)} \\cdot a^{\\left(2\\right)}$ by $W^{\\left(2\\right)}$. We know that the derivative of $4x$ with respect to $x$ is $4$, so the derivative of $W^{\\left(2\\right)} \\cdot a^{\\left(2\\right)}$ with respect to $W^{\\left(2\\right)}$ will be $a^{\\left(2\\right)}$\n",
  "\n",
  "$$ \n",
- "\\tag 9\n",
- "\\frac{\\partial J}{\\partial W^{\\left(2\\right)}} = \\left(z-y\\right)\\cdot\\sigma'\\left(z^{\\left(3\\right)}\\right)\\cdot\\left(a^{\\left(2\\right)}\\right)$$\n",
+ "\\frac{\\partial J}{\\partial W^{\\left(2\\right)}} = \\left(z-y\\right)\\cdot\\sigma'\\left(z^{\\left(3\\right)}\\right)\\cdot\\left(a^{\\left(2\\right)}\\right)\n",
+ "$$\n",
+ "\n",
+ "We'll denote the error term in the final layer by $\\delta^{(3)} = \\left(z-y\\right)\\cdot\\sigma'\\left(z^{\\left(3\\right)}\\right)$, so\n",
+ "\n",
+ "$$ \n",
+ "\\tag{9}\n",
+ "\\frac{\\partial J}{\\partial W^{\\left(2\\right)}} = \\delta^{\\left(3\\right)}\\cdot a^{\\left(2\\right)}\n",
+ "$$\n",
  "\n",
  "Now, coming back to the summation we ignored at the top of the derivation, we're going to fix that in the implementation using an accumulator matrix which will store the errors for every row and sum them up. "
  ]
  },
  {
  "cell_type": "markdown",
  "metadata": {},
- "source": []
+ "source": [
+ "## Sucky part $\\frac{\\partial J}{\\partial \\theta^{\\left(1\\right)}}$\n",
+ "\n",
+ "It's nearly the same as the previous step, but it involves one additional application of the chain rule. We'll start in the same way. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "$$\n",
+ "\\begin{align}\n",
+ "\\tag {from (1)}\n",
+ "\\frac{\\partial J}{\\partial W^{\\left(1\\right)}} &= \\frac{\\partial\\frac{1}{2}\\sum_{i=0}^m\\left(y-\\hat y\\right)^2}{\\partial W^{\\left(1\\right)}} \\\\\n",
+ "\\notag\n",
+ "&=\\frac{\\sum_{i=0}^m\\partial\\frac{1}{2}\\left(y-z\\right)^2}{\\partial W^{\\left(1\\right)}} \\\\\n",
+ "&= (y-\\hat y)\\cdot\\left(-\\frac{\\partial \\hat y}{\\partial W^{\\left(1\\right)}}\\right)\n",
+ "\\end{align}\n",
+ "$$\n",
+ "Note that we've skipped the summation sign as before. Mathematicians might be cursing me at this point. \n",
+ "\n",
+ "$$\n",
+ "\\begin{align}\n",
+ "\\notag\n",
+ "\\frac{\\partial J}{\\partial W^{\\left(1\\right)}} &= \\left(z-y\\right)\\cdot\\sigma'\\left(z^{\\left(3\\right)}\\right)\\cdot\\left(\\frac{\\partial z^{\\left(3\\right)}}{\\partial W^{\\left(1\\right)}}\\right) \\\\\n",
+ "\\end{align}\n",
+ "$$\n",
+ "\n",
+ "Things start to get a little different here. We cannot directly differentiate $z^{(3)}$ with respect to $W^{(1)}$ because $z^{(3)}$ does not directly depend on $W^{(1)}$. So we will use our good ol' chain rule again and divide it further.\n",
+ "\n",
+ "$$\n",
+ "\\frac{\\partial J}{\\partial W^{\\left(1\\right)}} = \\left(z-y\\right)\\cdot\\sigma'\\left(z^{\\left(3\\right)}\\right)\\cdot \\frac{\\partial z^{\\left(3\\right)}}{\\partial a^{\\left(2\\right)}}\\cdot\\frac{\\partial a^{\\left(2\\right)}}{\\partial W^{\\left(1\\right)}}\n",
+ "$$\n",
+ "\n",
+ "Replacing the value of $\\delta^{(3)}$ from equation (9)\n",
+ "\n",
+ "$$\n",
+ "\\frac{\\partial J}{\\partial W^{\\left(1\\right)}} = \\delta^{(3)} \\cdot \\frac{\\partial z^{\\left(3\\right)}}{\\partial a^{\\left(2\\right)}}\\cdot\\frac{\\partial a^{\\left(2\\right)}}{\\partial W^{\\left(1\\right)}}\n",
+ "$$\n",
+ "\n",
+ "Substituting the value of $z^{(3)}$ from equation (6)\n",
+ "\n",
+ "$$\n",
+ "\\begin{align}\n",
+ "\\notag\n",
+ "\\frac{\\partial J}{\\partial W^{\\left(1\\right)}} &= \\delta^{(3)} \\cdot \\frac{\\partial z^{\\left(3\\right)}}{\\partial a^{\\left(2\\right)}}\\cdot\\frac{\\partial a^{\\left(2\\right)}}{\\partial W^{\\left(1\\right)}} \\\\\n",
+ "&= \\delta^{(3)} \\cdot \\frac{\\partial\\left(W^{\\left(2\\right)}\\cdot a^{\\left(2\\right)}\\right)}{\\partial a^{\\left(2\\right)}} \\cdot\\frac{\\partial a^{\\left(2\\right)}}{\\partial W^{\\left(1\\right)}} \\\\\n",
+ "&= \\delta^{(3)} \\cdot W^{(2)} \\cdot \\frac{\\partial a^{\\left(2\\right)}}{\\partial W^{\\left(1\\right)}} \\\\\n",
+ "\\tag{Using (4)}\n",
+ "&= \\delta^{(3)} \\cdot W^{(2)} \\cdot \\frac{\\partial\\sigma\\left(z^{\\left(2\\right)}\\right)}{\\partial W^{\\left(1\\right)}} \\\\\n",
+ "\\tag{We've done this before}\n",
+ "&= \\delta^{(3)} \\cdot W^{(2)} \\cdot \\sigma'\\left(z^{\\left(2\\right)}\\right) \\cdot \\frac{\\partial z^{\\left(2\\right)}}{\\partial W^{\\left(1\\right)}}\n",
+ "\\end{align}\n",
+ "$$\n"
+ ]
  },
  {
  "cell_type": "code",

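Putting the two derived gradients together, here is a hedged, vectorized sketch of the backward pass with a finite-difference check (the sizes, names, and `loss` helper are all made up for illustration; biases are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

m, n_in, n_hidden, n_out = 5, 4, 3, 2          # tiny made-up sizes
X = rng.normal(size=(m, n_in))
Y = rng.random(size=(n_out, m))

W1 = rng.normal(size=(n_hidden, n_in))
W2 = rng.normal(size=(n_out, n_hidden))

# Forward pass
z2 = W1 @ X.T; a2 = sigmoid(z2)
z3 = W2 @ a2;  y_hat = sigmoid(z3)

# Backward pass
delta3 = (y_hat - Y) * y_hat * (1 - y_hat)     # error at the output layer
dJ_dW2 = delta3 @ a2.T                         # equation (9); the matrix product sums over rows
delta2 = (W2.T @ delta3) * a2 * (1 - a2)       # delta3 * W2 * sigma'(z2)
dJ_dW1 = delta2 @ X                            # last chain-rule factor: dz2/dW1 = X

# Finite-difference check of one entry of dJ/dW1
def loss(W1_, W2_):
    return 0.5 * np.sum((Y - sigmoid(W2_ @ sigmoid(W1_ @ X.T))) ** 2)

eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
print(dJ_dW1[0, 0], (loss(W1p, W2) - loss(W1, W2)) / eps)   # the two numbers should agree closely
```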