Chapter 1 Notes – https://aprakash.wordpress.com/2021/07/05/fastai-just-go-out-and-play-chapter-1/

I want to start this blog with some of my personal notes before jumping on the book.

Below is the notebook with examples of 1-dimensional, 2-dimensional and 3-dimensional arrays using NumPy.
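The embedded notebook no longer renders, so here is a minimal sketch of what it covered: creating 1-D, 2-D and 3-D arrays with NumPy.

```python
import numpy as np

# 1-dimensional array (a vector)
a1 = np.array([1, 2, 3])

# 2-dimensional array (a matrix)
a2 = np.array([[1, 2, 3],
               [4, 5, 6]])

# 3-dimensional array (e.g. a stack of matrices)
a3 = np.array([[[1, 2], [3, 4]],
               [[5, 6], [7, 8]]])

print(a1.ndim, a1.shape)  # 1 (3,)
print(a2.ndim, a2.shape)  # 2 (2, 3)
print(a3.ndim, a3.shape)  # 3 (2, 2, 2)
```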

2. What is a norm?

A norm is a measure of distance and has three properties:

1. All distances are positive: ||v|| ≥ 0, and ||v|| = 0 only for the zero vector.
2. Distances scale with scalar multiplication: ||cv|| = |c| ||v||.
3. If I travel from A to B and then from B to C, that is at least as far as going directly from A to C: ||v + w|| ≤ ||v|| + ||w||. This is also referred to as the triangle inequality.
L1 Norm – Also called the Manhattan distance, it is the sum of the absolute values of the entries.

L2 Norm (Euclidean Norm) – The default norm. It is the length of the straight-line (shortest) path between two points, regardless of the number of dimensions.

L-infinity Norm – The L∞ norm is the largest absolute value of any entry in the vector.

3. Given a vector, let's calculate its L0, L1, L2 and L∞ norms.
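As a worked example (my own choice of vector, not one from the book), take v = [3, -4, 0] and compute each value by hand in plain Python. Note that the "L0 norm" is not a true norm (it fails the scaling property); by convention it counts the nonzero entries.

```python
# Worked example with v = [3, -4, 0] (an arbitrary illustrative vector)
v = [3, -4, 0]

l0 = sum(1 for x in v if x != 0)      # count of nonzero entries -> 2
l1 = sum(abs(x) for x in v)           # 3 + 4 + 0 -> 7
l2 = sum(x**2 for x in v) ** 0.5      # sqrt(9 + 16 + 0) -> 5.0
linf = max(abs(x) for x in v)         # largest absolute entry -> 4

print(l0, l1, l2, linf)  # 2 7 5.0 4
```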

4. Calculate the norms using NumPy

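Since the embedded notebook doesn't display, here is a sketch of the same calculation with `np.linalg.norm`, using an example vector [3, -4, 0] of my own choosing:

```python
import numpy as np

v = np.array([3, -4, 0])

l0 = np.linalg.norm(v, ord=0)         # number of nonzero entries: 2.0
l1 = np.linalg.norm(v, ord=1)         # sum of absolute values: 7.0
l2 = np.linalg.norm(v)                # default is the L2 norm: 5.0
linf = np.linalg.norm(v, ord=np.inf)  # largest absolute value: 4.0

print(l0, l1, l2, linf)  # 2.0 7.0 5.0 4.0
```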

Now let's jump to Chapter 4 of the fastai book.

1. What is the difference between NumPy arrays and PyTorch tensors?

NumPy does not support GPU computation or automatic gradient calculation, both of which are critical for deep learning.

A PyTorch tensor can live on the GPU, in which case its computations will be optimized for the GPU and can run much faster. In addition, PyTorch can automatically calculate derivatives.
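A tiny sketch of the automatic-differentiation point (my own example, not from the book): PyTorch tracks operations on tensors created with `requires_grad=True` and computes the derivative for us.

```python
import torch

# y = x^2, so dy/dx = 2x; at x = 3 the gradient should be 6
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
y.backward()   # autograd computes dy/dx
print(x.grad)  # tensor(6.)
```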

2. Demo of NumPy array and PyTorch tensor

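The embedded demo doesn't render, so here is a minimal stand-in showing that arrays and tensors share the same basic API, how to convert between them, and how a tensor would be moved to a GPU if one is available:

```python
import numpy as np
import torch

arr = np.array([[1., 2.], [3., 4.]])
tns = torch.tensor([[1., 2.], [3., 4.]])

# Elementwise operations look identical
print(arr * 2)
print(tns * 2)

# Conversion in both directions
tns_from_arr = torch.from_numpy(arr)
arr_from_tns = tns.numpy()

# A tensor can be moved to the GPU (falls back to CPU if CUDA is absent)
device = "cuda" if torch.cuda.is_available() else "cpu"
tns = tns.to(device)
print(tns.device)
```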

3. Broadcasting

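Again the embed fails, so here is a small broadcasting sketch (shown with NumPy; PyTorch tensors follow the same rules): a smaller array is automatically "stretched" to match the shape of a larger one, without copying any data.

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])     # shape (2, 3)
row = np.array([10, 20, 30])  # shape (3,)

# row is broadcast across both rows of m
print(m + row)
# [[11 22 33]
#  [14 25 36]]

# A scalar broadcasts to every element
print(m * 2)
```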

4. Under the Hood: Training a Digit Classifier

The notebook walks step by step through classifying the MNIST handwritten-digit images without using deep learning.

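The notebook itself doesn't display, so here is a heavily simplified sketch of the chapter's pixel-similarity baseline, using tiny made-up "images" in place of the real MNIST files: average all training images of each class into an "ideal" image, then classify a new image by whichever ideal it is closer to in L1 distance.

```python
import numpy as np

# Toy 2x2 "images" standing in for MNIST 28x28 digits (made-up data)
threes = np.array([[[0.9, 0.1], [0.9, 0.1]],
                   [[0.8, 0.2], [0.8, 0.2]]])
sevens = np.array([[[0.1, 0.9], [0.1, 0.9]],
                   [[0.2, 0.8], [0.2, 0.8]]])

# "Ideal" digit = pixelwise mean over the training images of that class
mean3 = threes.mean(axis=0)
mean7 = sevens.mean(axis=0)

def l1_distance(a, b):
    """Mean absolute difference between two images."""
    return np.abs(a - b).mean()

def classify(img):
    """Predict 3 or 7 by which ideal image is closer."""
    return 3 if l1_distance(img, mean3) < l1_distance(img, mean7) else 7

test_img = np.array([[0.85, 0.15], [0.85, 0.15]])  # resembles a "3"
print(classify(test_img))  # 3
```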

I am off to write Part II of the notes :). The second part of the blog will cover Stochastic Gradient Descent (SGD), calculating gradients, learning rates, the MNIST loss function and mini-batches. Stay tuned!

