
Commit 3a92e8b

Instructions to run locally; close #2, close #4
1 parent 7a96d30 commit 3a92e8b

19 files changed (+54 −30)

.gitignore (+1)

```diff
@@ -12,6 +12,7 @@ assignment1/assignment.pdf
 
 # Assignment 2 generated
 assignment2/cs231n/build
+assignment2/cs231n/im2col_cython.cpython*
 assignment2/cs231n/datasets/cifar-10-batches-py
 assignment2/a2.zip
 assignment2/assignment.pdf
```

README.md (+29 −3)

````diff
@@ -1,12 +1,12 @@
 <h1 align="center">CS231n: Assignment Solutions</h1>
 <p align="center"><b>Convolutional Neural Networks for Visual Recognition</b></p>
-<p align="center"><i>Stanford - Spring 2021</i></p>
+<p align="center"><i>Stanford - Spring 2021-2023</i></p>
 
 ## About
 ### Overview
-These are my solutions for the **CS231n** course assignments offered by Stanford University (Spring 2021). Solutions work for further years like 2022. Inline questions are explained in detail, the code is brief and commented (see examples below). From what I investigated, these should be the shortest code solutions (excluding open-ended challenges). In assignment 2, _DenseNet_ is used in _PyTorch_ notebook and _ResNet_ in _TensorFlow_ notebook.
+These are my solutions for the **CS231n** course assignments offered by Stanford University (Spring 2021). The solutions also work for later years, such as 2022 and 2023. Inline questions are explained in detail, the code is brief and commented (see examples below). From what I investigated, these should be the shortest code solutions (excluding open-ended challenges). In assignment 2, _DenseNet_ is used in the _PyTorch_ notebook and _ResNet_ in the _TensorFlow_ notebook.
 
-> Check out the solutions for **[CS224n](https://github.com/mantasu/cs224n)**. From what I checked, they contain more comprehensive explanations than others.
+> Check out the solutions for **[CS224n](https://github.com/mantasu/cs224n)**. They contain more comprehensive explanations than others.
 
 ### Main sources (official)
 * [**Course page**](http://cs231n.stanford.edu/index.html)
@@ -42,7 +42,33 @@ These are my solutions for the **CS231n** course assignments offered by Stanford
 
 <br>
 
+## Running Locally
+
+It is advised to run in [Colab](https://colab.research.google.com/); however, you can also run locally. To do so, first set up your environment, either through [conda](https://docs.conda.io/en/latest/) or [venv](https://docs.python.org/3/library/venv.html). It is also recommended to install [PyTorch](https://pytorch.org/get-started/locally/) in advance with GPU acceleration. Then, follow these steps:
+1. Install the required packages:
+   ```bash
+   pip install -r requirements.txt
+   ```
+2. Change every first code cell in `.ipynb` files to:
+   ```bash
+   %cd cs231n/datasets/
+   !bash get_datasets.sh
+   %cd ../../
+   ```
+3. Change the first code cell in section **Fast Layers** in [ConvolutionalNetworks.ipynb](assignment2/ConvolutionalNetworks.ipynb) to:
+   ```bash
+   %cd cs231n
+   !python setup.py build_ext --inplace
+   %cd ..
+   ```
+
+I've gathered the requirements for all 3 assignments into one file, [requirements.txt](requirements.txt), so there is no need to additionally install the requirements specified under each assignment folder. If you plan to complete [TensorFlow.ipynb](assignment2/TensorFlow.ipynb), you also need to install [TensorFlow](https://www.tensorflow.org/install).
+
+
+> **Note**: to use MPS acceleration via Apple M1, see the comment in [#4](https://github.com/mantasu/cs231n/issues/4#issuecomment-1492202538).
+
 ## Examples
+
 <details><summary><b>Inline question example</b></summary>
 <br>
 <b>Inline Question 1</b>
````
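As a side note on the PyTorch and MPS remarks above, here is a minimal sketch (not part of this commit) for checking which accelerator a local install can use, assuming the `torch` package from requirements.txt is installed:

```python
import torch

# Pick the best available device: CUDA GPU, Apple-Silicon MPS, or CPU fallback.
# The getattr guard keeps this working on older torch builds without an mps backend.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    device = torch.device("mps")  # Apple M1/M2, see the note referencing issue #4
else:
    device = torch.device("cpu")

print(f"Using device: {device}")
```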

assignment1/cs231n/classifiers/k_nearest_neighbor.py (−1)

```diff
@@ -1,7 +1,6 @@
 from builtins import range
 from builtins import object
 import numpy as np
-from past.builtins import xrange
 
 
 class KNearestNeighbor(object):
```
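A brief aside on the `xrange` removals in this commit: under Python 3 the built-in `range` is already a lazy sequence, so the `past.builtins` compatibility import is redundant. The same removal recurs in the files below. A one-line illustration:

```python
# Python 3's range() is lazy, like Python 2's xrange, so no compatibility import is needed.
print(type(range(5)))   # <class 'range'> -- a lazy sequence object, not a list
print(list(range(5)))   # [0, 1, 2, 3, 4]
```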

assignment1/cs231n/classifiers/linear_classifier.py (−1)

```diff
@@ -5,7 +5,6 @@
 import numpy as np
 from ..classifiers.linear_svm import *
 from ..classifiers.softmax import *
-from past.builtins import xrange
 
 
 class LinearClassifier(object):
```

assignment1/cs231n/classifiers/linear_svm.py (−1)

```diff
@@ -1,7 +1,6 @@
 from builtins import range
 import numpy as np
 from random import shuffle
-from past.builtins import xrange
 
 
 def svm_loss_naive(W, X, y, reg):
```

assignment1/cs231n/classifiers/softmax.py (−1)

```diff
@@ -1,7 +1,6 @@
 from builtins import range
 import numpy as np
 from random import shuffle
-from past.builtins import xrange
 
 
 def softmax_loss_naive(W, X, y, reg):
```

assignment1/cs231n/features.py (−1)

```diff
@@ -1,7 +1,6 @@
 from __future__ import print_function
 from builtins import zip
 from builtins import range
-from past.builtins import xrange
 
 import matplotlib
 import numpy as np
```

assignment1/cs231n/gradient_check.py (−1)

```diff
@@ -1,6 +1,5 @@
 from __future__ import print_function
 from builtins import range
-from past.builtins import xrange
 
 import numpy as np
 from random import randrange
```

assignment1/cs231n/layers.py (+4 −4)

```diff
@@ -149,8 +149,8 @@ class for the ith input.
     ###########################################################################
     # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
 
-    N = len(y)                       # number of samples
-    x_true = x[range(N), y][:, None] # scores for true labels
+    N = len(y)                               # number of samples
+    x_true = x[range(N), y][:, None]         # scores for true labels
     margins = np.maximum(0, x - x_true + 1)  # margin for each score
     loss = margins.sum() / N - 1
     dx = (margins > 0).astype(float) / N
@@ -187,8 +187,8 @@ class for the ith input.
 
     N = len(y)  # number of samples
 
-    P = np.exp(x - x.max())                       # numerically stable exponents
-    P /= P.sum(axis=1, keepdims=True)             # row-wise probabilities (softmax)
+    P = np.exp(x - x.max(axis=1, keepdims=True))  # numerically stable exponents
+    P /= P.sum(axis=1, keepdims=True)             # row-wise probabilities (softmax)
 
     loss = -np.log(P[range(N), y]).sum() / N  # sum cross entropies as loss
 
```
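The second hunk above swaps the global `x.max()` shift for a row-wise `x.max(axis=1, keepdims=True)`. A minimal sketch of why the row-wise shift is safer; the score values below are made up purely for illustration:

```python
import numpy as np

# Scores for 2 samples and 3 classes; one row has much larger values than the other.
x = np.array([[1000.0, 1001.0, 1002.0],
              [  -5.0,    0.0,    5.0]])

# Global max: every row is shifted by the same scalar (1002 here), so the small-valued
# row underflows to all zeros and its normalization yields nan (with a runtime warning).
P_global = np.exp(x - x.max())
P_global /= P_global.sum(axis=1, keepdims=True)
print(P_global[1])  # [nan nan nan]

# Row-wise max: each row is shifted so its own largest score becomes 0,
# which keeps every row numerically well-behaved regardless of the other rows.
P_row = np.exp(x - x.max(axis=1, keepdims=True))
P_row /= P_row.sum(axis=1, keepdims=True)
print(P_row[1])     # a valid probability distribution over the 3 classes
```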

assignment1/cs231n/solver.py (−2)

```diff
@@ -1,7 +1,5 @@
 from __future__ import print_function, division
-from future import standard_library
 
-standard_library.install_aliases()
 from builtins import range
 from builtins import object
 import os
```

assignment1/cs231n/vis_utils.py (−1)

```diff
@@ -1,5 +1,4 @@
 from builtins import range
-from past.builtins import xrange
 
 from math import sqrt, ceil
 import numpy as np
```

assignment1/knn.ipynb (+1 −1)

Large diffs are not rendered by default.

assignment2/cs231n/gradient_check.py (−1)

```diff
@@ -1,6 +1,5 @@
 from __future__ import print_function
 from builtins import range
-from past.builtins import xrange
 
 import numpy as np
 from random import randrange
```

assignment2/cs231n/layers.py (+6 −7)

```diff
@@ -394,11 +394,10 @@ def layernorm_forward(x, gamma, beta, ln_param):
     ###########################################################################
     # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
 
-    ln_param.setdefault('mode', 'train')        # same as batchnorm in train mode
-    ln_param.setdefault('axis', 1)              # over which axis to sum for grad
-    [gamma, beta] = np.atleast_2d(gamma, beta)  # assure 2D to perform transpose
+    bn_param = {"mode": "train", "axis": 1, **ln_param}  # same as batchnorm in train mode + over which axis to sum for grad
+    [gamma, beta] = np.atleast_2d(gamma, beta)           # assure 2D to perform transpose
 
-    out, cache = batchnorm_forward(x.T, gamma.T, beta.T, ln_param)  # same as batchnorm
+    out, cache = batchnorm_forward(x.T, gamma.T, beta.T, bn_param)  # same as batchnorm
     out = out.T  # transpose back
 
     # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
@@ -836,14 +835,14 @@ def spatial_groupnorm_forward(x, gamma, beta, G, gn_param):
     ###########################################################################
     # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
 
-    N, C, H, W = x.shape                                         # input dims
-    gn_param.update({'shape':(W, H, C, N), 'axis':(0, 1, 3)})    # params to reuse batchnorm method
+    N, C, H, W = x.shape                                              # input dims
+    ln_param = {"shape":(W, H, C, N), "axis":(0, 1, 3), **gn_param}   # params to reuse batchnorm method
 
     x = x.reshape(N*G, -1)                                  # reshape x to use vanilla layernorm
     gamma = np.tile(gamma, (N, 1, H, W)).reshape(N*G, -1)   # reshape gamma to use vanilla layernorm
     beta = np.tile(beta, (N, 1, H, W)).reshape(N*G, -1)     # reshape beta to use vanilla layernorm
 
-    out, cache = layernorm_forward(x, gamma, beta, gn_param)  # perform vanilla layernorm
+    out, cache = layernorm_forward(x, gamma, beta, ln_param)  # perform vanilla layernorm
     out = out.reshape(N, C, H, W)                              # reshape back the output
     cache = (G, cache)                                         # cache involves G
 
```
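Both hunks above replace in-place mutation of the caller's parameter dict (`setdefault` / `update`) with building a new dict via unpacking. A small sketch of the difference; the `eps` entry is just an illustrative stand-in for whatever the caller passes:

```python
# Old style: setdefault mutates the dict the caller handed in, so the injected
# defaults leak back into the caller's ln_param after the function returns.
ln_param = {"eps": 1e-5}
ln_param.setdefault("mode", "train")
ln_param.setdefault("axis", 1)
print(ln_param)   # {'eps': 1e-05, 'mode': 'train', 'axis': 1}

# New style: unpack the caller's dict over the defaults, so caller-supplied keys
# still win while the original dict is left untouched.
ln_param = {"eps": 1e-5}
bn_param = {"mode": "train", "axis": 1, **ln_param}
print(bn_param)   # {'mode': 'train', 'axis': 1, 'eps': 1e-05}
print(ln_param)   # {'eps': 1e-05}  (unchanged)
```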

assignment2/cs231n/solver.py (−2)

```diff
@@ -1,7 +1,5 @@
 from __future__ import print_function, division
-from future import standard_library
 
-standard_library.install_aliases()
 from builtins import range
 from builtins import object
 import os
```

assignment2/cs231n/vis_utils.py (−1)

```diff
@@ -1,5 +1,4 @@
 from builtins import range
-from past.builtins import xrange
 
 from math import sqrt, ceil
 import numpy as np
```

assignment3/Generative_Adversarial_Networks.ipynb (+1 −1)

Large diffs are not rendered by default.

assignment3/Self_Supervised_Learning.ipynb (+1 −1)

Large diffs are not rendered by default.

requirements.txt (+11)

```diff
@@ -0,0 +1,11 @@
+Cython
+tqdm
+h5py
+torch
+torchvision
+tabulate
+scipy
+pandas
+imageio
+matplotlib
+ipykernel
```

0 comments