
Commit 03e95ed

committed
added assignment2 skeleton
1 parent 2e8d29d commit 03e95ed

File tree: 2 files changed (+123, -0 lines)


_config.yml

+2
@@ -21,3 +21,5 @@ kramdown:
 # links to homeworks
 hw_1_jupyter: https://cs231n.github.io/assignments/2020/assignment1_jupyter.zip
 hw_1_colab: https://cs231n.github.io/assignments/2020/assignment1_colab.zip
+hw_2_jupyter:
+hw_2_colab:

assignments/2020/assignment2.md

+121
@@ -0,0 +1,121 @@
---
layout: page
title: Assignment 2
mathjax: true
permalink: /assignments2020/assignment2/
---

This assignment is due on **Wednesday, May 6 2020** at 11:59pm PST.

<details>
<summary>Handy Download Links</summary>

<ul>
<li><a href="{{ site.hw_2_colab }}">Option A: Colab starter code</a></li>
<li><a href="{{ site.hw_2_jupyter }}">Option B: Jupyter starter code</a></li>
</ul>
</details>

- [Goals](#goals)
- [Setup](#setup)
- [Option A: Google Colaboratory (Recommended)](#option-a-google-colaboratory-recommended)
- [Option B: Local Development](#option-b-local-development)
- [Q1: Fully-connected Neural Network (20 points)](#q1-fully-connected-neural-network-20-points)
- [Q2: Batch Normalization (30 points)](#q2-batch-normalization-30-points)
- [Q3: Dropout (10 points)](#q3-dropout-10-points)
- [Q4: Convolutional Networks (30 points)](#q4-convolutional-networks-30-points)
- [Q5: PyTorch / TensorFlow on CIFAR-10 (10 points)](#q5-pytorch--tensorflow-on-cifar-10-10-points)
- [Submitting your work](#submitting-your-work)

### Goals

In this assignment you will practice writing backpropagation code and training Neural Networks and Convolutional Neural Networks. The goals of this assignment are as follows:

- Understand **Neural Networks** and how they are arranged in layered architectures.
- Understand and be able to implement (vectorized) **backpropagation**.
- Implement various **update rules** used to optimize Neural Networks.
- Implement **Batch Normalization** and **Layer Normalization** for training deep networks.
- Implement **Dropout** to regularize networks.
- Understand the architecture of **Convolutional Neural Networks** and get practice with training these models on data.
- Gain experience with a major deep learning framework, such as **TensorFlow** or **PyTorch**.
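As a taste of the layered, vectorized design used throughout the assignment, a single affine (fully-connected) layer's forward/backward pair can be sketched in NumPy as below. This is an illustrative sketch with hypothetical function names, not the assignment's starter code; the numeric gradient check at the end is one standard way to verify a backward pass.

```python
import numpy as np

def affine_forward(x, w, b):
    """Forward pass: flatten each input, then out = x @ w + b."""
    out = x.reshape(x.shape[0], -1) @ w + b
    cache = (x, w, b)
    return out, cache

def affine_backward(dout, cache):
    """Backward pass: route the upstream gradient dout to x, w, and b."""
    x, w, b = cache
    dx = (dout @ w.T).reshape(x.shape)
    dw = x.reshape(x.shape[0], -1).T @ dout
    db = dout.sum(axis=0)
    return dx, dw, db

# Check one entry of dw against a numeric (finite-difference) gradient.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 5))
w = rng.standard_normal((5, 3))
b = rng.standard_normal(3)
out, cache = affine_forward(x, w, b)
dout = rng.standard_normal(out.shape)
dx, dw, db = affine_backward(dout, cache)

h = 1e-6
i, j = 1, 2
w_pert = w.copy()
w_pert[i, j] += h
num = ((affine_forward(x, w_pert, b)[0] - out) * dout).sum() / h
assert abs(num - dw[i, j]) < 1e-4
```

Every layer in a modular design follows this same contract: the forward pass returns an output plus a cache, and the backward pass consumes the upstream gradient plus that cache.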
### Setup

You can work on the assignment in one of two ways: **remotely** on Google Colaboratory or **locally** on your own machine.

**Regardless of the method chosen, ensure you have followed the [setup instructions](/setup-instructions) before proceeding.**

#### Option A: Google Colaboratory (Recommended)

**Download.** Starter code containing Colab notebooks can be downloaded [here]({{site.hw_2_colab}}).

If you choose to work with Google Colab, please familiarize yourself with the [recommended workflow]({{site.baseurl}}/setup-instructions/#working-remotely-on-google-colaboratory).

**Note 1.** Please make sure that you work on the Colab notebooks in the order of the questions (see below). The reason is that the code cells that get executed *at the end* of each notebook save the modified files back to your Drive, and some notebooks require code from previous notebooks.

**Note 2.** Relatedly, ensure you are periodically saving your notebook (`File -> Save`), and any edited `.py` files relevant to that notebook (i.e. **by executing the last code cell**), so that you don't lose your progress if you step away from the assignment and the Colab VM disconnects.

Once you have completed all Colab notebooks **except `collect_submission.ipynb`**, proceed to the [submission instructions](#submitting-your-work).

#### Option B: Local Development

**Download.** Starter code containing Jupyter notebooks can be downloaded [here]({{site.hw_2_jupyter}}).

**Install Packages.** Once you have the starter code, activate your environment (the one you installed in the [Software Setup]({{site.baseurl}}/setup-instructions/) page) and run `pip install -r requirements.txt`.

**Download CIFAR-10.** Next, you will need to download the CIFAR-10 dataset. Run the following from the `assignment2` directory:

```bash
cd cs231n/datasets
./get_datasets.sh
```
**Start Jupyter Server.** After you have the CIFAR-10 data, start the Jupyter server from the `assignment2` directory by executing `jupyter notebook` in your terminal.

Complete each notebook, then once you are done, go to the [submission instructions](#submitting-your-work).

### Q1: Fully-connected Neural Network (20 points)

The notebook `FullyConnectedNets.ipynb` will introduce you to our modular layer design, and then use those layers to implement fully-connected networks of arbitrary depth. To optimize these models you will implement several popular update rules.
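One of the popular update rules referred to above is SGD with momentum, which keeps a velocity term that accumulates past gradients. A minimal NumPy sketch (the function signature and `config` dict here are hypothetical, not the starter code's API):

```python
import numpy as np

def sgd_momentum(w, dw, config):
    """One SGD+momentum step: v = mu * v - lr * dw, then w += v."""
    v = config.get("velocity", np.zeros_like(w))
    v = config["momentum"] * v - config["learning_rate"] * dw
    config["velocity"] = v
    return w + v, config

# Usage: repeated steps on the quadratic bowl f(w) = 0.5 * ||w||^2,
# whose gradient is simply w. The iterates spiral in toward the minimum.
w = np.array([5.0, -3.0])
config = {"learning_rate": 0.1, "momentum": 0.9}
for _ in range(200):
    dw = w  # gradient of 0.5 * ||w||^2
    w, config = sgd_momentum(w, dw, config)
```

Compared with plain SGD, the velocity term damps oscillation across steep directions and accelerates progress along shallow ones.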

### Q2: Batch Normalization (30 points)

In the notebook `BatchNormalization.ipynb` you will implement batch normalization and use it to train deep fully-connected networks.
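For intuition, the training-time forward pass of batch normalization can be sketched in a few lines of NumPy. This is a simplified illustration: it omits the running mean/variance tracking (needed at test time) and the backward pass that the notebook will ask for.

```python
import numpy as np

def batchnorm_forward_train(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)               # per-feature batch mean
    var = x.var(axis=0)               # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 4)) * 3.0 + 7.0   # badly scaled inputs
out = batchnorm_forward_train(x, gamma=np.ones(4), beta=np.zeros(4))
# After normalization, each feature has mean ~0 and std ~1 over the batch.
```

The learnable `gamma` and `beta` let the network undo the normalization if that turns out to be the better representation.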

### Q3: Dropout (10 points)

The notebook `Dropout.ipynb` will help you implement Dropout and explore its effects on model generalization.
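The core trick is "inverted" dropout: scale activations up at training time so that test time needs no change. A NumPy sketch (here `p` is the *keep* probability, a convention choice; the notebook's interface may differ):

```python
import numpy as np

def dropout_forward(x, p, mode, rng):
    """Inverted dropout: divide by p at train time so test time is a no-op."""
    if mode == "train":
        mask = (rng.random(x.shape) < p) / p   # keep each unit with prob. p
        return x * mask
    return x  # test mode: identity

rng = np.random.default_rng(0)
x = np.ones((500, 500))
out = dropout_forward(x, p=0.5, mode="train", rng=rng)
# Dividing the mask by p preserves the expected activation: E[out] = x.
```

Because the scaling happens at training time, the same network runs unchanged at test time, which is why the test-mode branch is just the identity.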

### Q4: Convolutional Networks (30 points)

In the IPython Notebook `ConvolutionalNetworks.ipynb` you will implement several new layers that are commonly used in convolutional networks.
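As a preview, a deliberately naive (loop-based) convolution forward pass looks like the NumPy sketch below; the notebook's actual interface, parameter conventions, and bias handling may differ. Slow loops like these are a good first implementation to test vectorized versions against.

```python
import numpy as np

def conv_forward_naive(x, w, stride=1, pad=1):
    """Naive convolution: slide each filter over every output position."""
    N, C, H, W = x.shape          # batch, channels, height, width
    F, _, HH, WW = w.shape        # filters, channels, filter height/width
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    H_out = (H + 2 * pad - HH) // stride + 1
    W_out = (W + 2 * pad - WW) // stride + 1
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    patch = xp[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = (patch * w[f]).sum()
    return out

x = np.random.default_rng(0).standard_normal((2, 3, 8, 8))
w = np.random.default_rng(1).standard_normal((4, 3, 3, 3))
out = conv_forward_naive(x, w)   # stride 1, pad 1 preserves spatial size
```

Note how the output size formula `(H + 2*pad - HH) // stride + 1` falls directly out of counting valid filter placements.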

### Q5: PyTorch / TensorFlow on CIFAR-10 (10 points)

For this last part, you will be working in either TensorFlow or PyTorch, two popular and powerful deep learning frameworks. **You only need to complete ONE of these two notebooks.** You do NOT need to do both, and we will _not_ be awarding extra credit to those who do.

Open up either `PyTorch.ipynb` or `TensorFlow.ipynb`. There, you will learn how the framework works, culminating in training a convolutional network of your own design on CIFAR-10 to get the best performance you can.
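To give a flavor of the PyTorch option, here is a minimal convolutional model and one forward/backward pass on fake CIFAR-10-shaped data. The architecture is a hypothetical example of mine, not the one the notebook asks you to build:

```python
import torch
import torch.nn as nn

# A tiny conv net: conv -> relu -> pool -> linear classifier.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),  # 3x32x32 -> 32x32x32
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32x32 -> 32x16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),                 # 10 CIFAR-10 classes
)

x = torch.randn(4, 3, 32, 32)                    # a fake batch of images
scores = model(x)                                # class scores, shape (4, 10)
loss = nn.functional.cross_entropy(scores, torch.tensor([0, 1, 2, 3]))
loss.backward()                                  # autograd fills in gradients
```

The framework's autograd replaces the hand-written backward passes from the earlier questions, which is exactly the point of this final part.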

### Submitting your work

**Important.** Please make sure that the submitted notebooks have been run and the cell outputs are visible.

Once you have completed all notebooks and filled out the necessary code, there are **_two_** steps you must follow to submit your assignment:

**1.** If you selected Option A and worked on the assignment in Colab, open `collect_submission.ipynb` in Colab and execute the notebook cells. If you selected Option B and worked on the assignment locally, run the bash script in `assignment2` by executing `bash collectSubmission.sh`.

This notebook/script will:

* Generate a zip file of your code (`.py` and `.ipynb`) called `a2.zip`.
* Convert all notebooks into a single PDF file.

**Note for Option B users.** You must have (a) `nbconvert` installed with Pandoc and TeX support and (b) `PyPDF2` installed to successfully convert your notebooks to a PDF file. Please follow these [installation instructions](https://nbconvert.readthedocs.io/en/latest/install.html#installing-nbconvert) to install (a) and run `pip install PyPDF2` to install (b). If you are, for some inexplicable reason, unable to successfully install the above dependencies, you can manually convert each Jupyter notebook to HTML (`File -> Download as -> HTML (.html)`), save the HTML page as a PDF, then concatenate all the PDFs into a single PDF submission using your favorite PDF viewer.

If your submission for this step was successful, you should see the following display message:

`### Done! Please submit a2.zip and the pdfs to Gradescope. ###`

**2.** Submit the PDF and the zip file to [Gradescope](https://www.gradescope.com/courses/103764).

**Note for Option A users.** Remember to download `a2.zip` and `assignment.pdf` locally before submitting to Gradescope.
