I want to start this blog with some of my personal notes before jumping into the book.
- Below is the notebook with examples of 1-dimensional, 2-dimensional and 3-dimensional arrays using NumPy.
2. What is a norm?
A norm is a measure of distance and has three properties:
- All distances are non-negative
- ||v|| >= 0
- Norms scale with scalar multiplication
- ||av|| = |a|·||v||
- If I travel from A to B and then from B to C, that is at least as far as going directly from A to C
- ||v + w|| <= ||v|| + ||w||
- Also referred to as the triangle inequality
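The three properties above can be checked numerically. A minimal sketch, using a made-up vector and scalar of my own (not from the notebook):

```python
import numpy as np

# Hypothetical example values, chosen just to exercise the properties.
v = np.array([3.0, -4.0])
w = np.array([1.0, 2.0])
a = -2.0

# 1. Non-negativity: ||v|| >= 0
assert np.linalg.norm(v) >= 0

# 2. Scaling: ||a*v|| == |a| * ||v||
assert np.isclose(np.linalg.norm(a * v), abs(a) * np.linalg.norm(v))

# 3. Triangle inequality: ||v + w|| <= ||v|| + ||w||
assert np.linalg.norm(v + w) <= np.linalg.norm(v) + np.linalg.norm(w)
```

Here ||v|| = 5 (a 3-4-5 triangle), so scaling by a = -2 gives ||av|| = 10, and the triangle inequality holds with room to spare.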
L1 Norm – Also called the Manhattan distance, it is the sum of the absolute values of the vector's entries.
L2 Norm (Euclidean Norm) – The default norm. It is the shortest distance between two points, regardless of the number of dimensions.
L-infinity Norm – The L∞ norm is the largest absolute value of any entry in the vector.
3. Given a vector, let's calculate the L0, L1, L2 and L∞ norms for it.
4. Calculate the norms using NumPy
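As a sketch of that calculation (the example vector here is my own, not the notebook's), NumPy's `np.linalg.norm` covers all four via its `ord` parameter:

```python
import numpy as np

x = np.array([3.0, 0.0, -4.0])  # illustrative vector

l0 = np.linalg.norm(x, ord=0)       # number of non-zero entries (not a true norm)
l1 = np.linalg.norm(x, ord=1)       # sum of absolute values
l2 = np.linalg.norm(x, ord=2)       # Euclidean length (the default)
linf = np.linalg.norm(x, ord=np.inf)  # largest absolute entry

print(l0, l1, l2, linf)  # 2.0 7.0 5.0 4.0
```

Note that `ord=0` counts non-zero entries; it is conventionally called the "L0 norm" even though it violates the scaling property above.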
Now let's jump to Chapter 4 of the fastai book –
- What is the difference between NumPy Array and PyTorch Tensors?
NumPy does not support running on the GPU or calculating gradients, both of which are critical for deep learning.
PyTorch tensors can live on the GPU, in which case their computation is optimized for the GPU and can run much faster. In addition, PyTorch can automatically calculate derivatives.
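A minimal sketch of that difference, assuming PyTorch is installed (the values are illustrative):

```python
import numpy as np
import torch

# NumPy: plain array arithmetic, no gradient tracking.
a = np.array([2.0, 3.0])
print((a ** 2).sum())  # 13.0

# PyTorch: the same computation, but with requires_grad=True the tensor
# records its operations so derivatives can be computed automatically.
t = torch.tensor([2.0, 3.0], requires_grad=True)
loss = (t ** 2).sum()
loss.backward()   # computes d(loss)/dt for every tracked tensor
print(t.grad)     # tensor([4., 6.]) -- the derivative of x^2 is 2x
```

A tensor can also be moved to a GPU with `t.to("cuda")` when one is available, which is what makes the GPU speed-up possible.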
2. Demo of NumPy array and PyTorch tensor
4. Under the Hood: Training a Digit Classifier
The notebook has the step-by-step approach taken to classify the MNIST (handwritten digit) images without using deep learning.
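The book's non-deep-learning baseline averages all training images of a digit to get an "ideal" image, then classifies a new image by which ideal it is closer to under the L1 norm. A self-contained sketch of that idea, using random arrays as stand-ins for the real MNIST images (real code would load the dataset instead):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake stand-in data: two classes of 28x28 "images" so the sketch runs
# on its own. In the notebook these are the actual 3s and 7s from MNIST.
threes = rng.random((100, 28, 28))
sevens = rng.random((100, 28, 28))

# Average every training image of each digit to get an "ideal" digit.
mean3 = threes.mean(axis=0)
mean7 = sevens.mean(axis=0)

def is_three(img):
    # Classify by mean absolute pixel difference (L1 distance) to each ideal.
    dist3 = np.abs(img - mean3).mean()
    dist7 = np.abs(img - mean7).mean()
    return dist3 < dist7
```

On random stand-in data this classifier is of course no better than chance; on the real MNIST 3s and 7s the same distance-to-the-mean idea already works surprisingly well.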
I am off to write Part II of the notes :). The second part of the blog will cover Stochastic Gradient Descent (SGD), calculating gradients, learning rates, the MNIST loss function and mini-batches. Stay tuned!