# fastai: Just Go Out and Play [Chapter 1]

For quite some time I have been thinking about starting to write about data science and machine learning, but somehow it wasn’t happening. Recently, I joined a 25-week-long fastbook reading session organized by Weights & Biases and led by Aman Arora (awesome instructor 👏). This is my first time participating in a reading session, and it has been an amazing experience. There is so much participation during and after each lecture that it makes learning fun.

I would also like to thank Rachel Thomas, Jeremy Howard, and Sylvain Gugger for their contributions to building a community around AI and for fast.ai.

My fastai blog series is mostly a set of notes for myself, but if it helps you in any way, I will be glad 🙂. So let’s start with my notes from Chapter 1: Your Deep Learning Journey.

1. What is Machine Learning

Machine learning is the training of programs developed by allowing a computer to learn from its experience, rather than by manually coding the individual steps. It can also be summarized as a discipline in which we define a program not by writing it entirely ourselves, but by learning it from data.

A typical traditional program takes an input and spits out a result based on the program’s algorithm.

But in the machine learning world there is a little more to it. A model is a special kind of program: one that can do many different things depending on its weights, and it can be implemented in many different ways. The inputs to the model stay the same, but you can update the weights to improve performance based on how wrong or right the model is. A trained model knows how to do the task that it was taught to do.
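The distinction above can be sketched in a few lines of plain Python. This is a toy illustration of the idea, not fastai code; all the names here are made up:

```python
# A traditional program: behavior is fixed by hand-written rules.
def traditional_program(x):
    return 2 * x + 1

# A model: the same input, but the output depends on the current weights.
def model(x, weights):
    w, b = weights
    return w * x + b

# Untrained weights give poor results; "training" means finding better ones.
print(model(3, (0.0, 0.0)))   # 0.0 — untrained weights
print(model(3, (2.0, 1.0)))   # 7.0 — weights that match the target rule
print(traditional_program(3)) # 7.0 — the hand-coded equivalent
```

The key point: updating the weights changes the model's behavior without rewriting the program itself.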

2. What is Deep Learning

Deep learning is a modern area within the more general discipline of machine learning. It is a computing technique to extract and transform data using multiple layers of neural networks. Neural networks are collections of neurons linked to one another. Deep learning models can automatically learn to extract useful features from raw data. A typical neural network, or multi-layer perceptron, will have an input layer, followed by one or more hidden layers, followed by an output layer.
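A forward pass through such a network is just repeated weighted sums and nonlinearities. Here is a minimal pure-Python sketch with made-up weights (2 inputs, one hidden layer of 3 neurons, 1 output), purely to illustrate the layer structure:

```python
def relu(x):
    # a common nonlinearity: negative values are clamped to zero
    return max(0.0, x)

def layer(inputs, weights, biases):
    # each neuron: weighted sum of all inputs, plus a bias, then relu
    return [relu(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

hidden_w = [[0.5, -0.2], [0.3, 0.8], [-0.7, 0.1]]  # 3 neurons x 2 inputs
hidden_b = [0.1, 0.0, 0.2]
out_w = [[1.0, 1.0, 0.5]]                          # 1 neuron x 3 hidden
out_b = [0.0]

x = [1.0, 2.0]                  # input layer
hidden = layer(x, hidden_w, hidden_b)   # hidden layer activations
output = layer(hidden, out_w, out_b)    # output layer
print(output)
```

Real networks differ only in scale: millions of weights, many layers, and the weights are learned rather than hand-picked.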

If you think of a neural network as a mathematical function, it turns out to be a function that is extremely flexible depending on its weights. A mathematical proof called the universal approximation theorem shows that, in theory, this function can solve any problem to any level of accuracy. This means finding good weight assignments is how we create a suitable model, and to tune/update these weights, something called stochastic gradient descent (SGD) is used.
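The core idea of SGD can be shown on a single weight. This toy sketch (not fastai code) minimizes the squared error of a one-parameter model y = w * x by stepping the weight against the gradient:

```python
x, y_true = 3.0, 6.0   # one training example; the ideal weight is 2.0
w = 0.0                # start from a poor initial guess
lr = 0.01              # learning rate: how big each update step is

for _ in range(500):
    y_pred = w * x
    # loss = (y_pred - y_true)**2, so d(loss)/dw = 2 * (y_pred - y_true) * x
    grad = 2 * (y_pred - y_true) * x
    w -= lr * grad     # step the weight in the direction that reduces the loss

print(round(w, 4))     # 2.0 — converged to the weight that fits the data
```

Real SGD does exactly this, but over millions of weights at once and on random mini-batches of data rather than a single example.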

Fundamental things about training a deep learning model:

• A model cannot be created without data.
• A model can learn to operate only on patterns seen in its inputs.
• This learning approach creates only predictions, not recommended actions.

3. Deep Learning is for Everyone

In today’s world deep learning is being used in many different areas. To name a few:

• Natural language processing
• Computer vision
• Medicine
• Biology
• Image generation
• Recommendation systems

4. fastai

fastai is based on PyTorch. PyTorch is a low-level foundational library, and fastai adds higher-level functionality on top of it. The book teaches with a “top-down” approach. The analogy is that if you want to learn to play football, the first thing you should do is get a football and just start playing with it. Gradually, as you spend more time playing, you will get better at it.

5. GPU and Jupyter Notebooks

Most deep learning work needs access to a GPU. GPU stands for Graphics Processing Unit, aka graphics card. It can handle thousands of small tasks at the same time, which gives a multifold performance improvement for deep learning.

Jupyter Notebook has become the go-to software for developing and interacting with deep learning models for many data scientists and machine learning engineers.

6. Dataset

Without data there is no machine learning. A set of data, which could be text, images, emails, tabular records, etc., is called a dataset. Using this dataset you train a model.

7. The First Model

As mentioned before, just go out there and play your first game of football. Here is the code to build and train a cat-vs-dog classifier using fastai.

```
!pip install -Uqq fastbook
import fastbook
fastbook.setup_book()

from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path,
    get_image_files(path),
    valid_pct=0.2,
    seed=42,
    label_func=is_cat,
    item_tfms=Resize(224)
)

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(4)
```

Let’s dig a little deeper into what the above code is doing:

The first three lines install and set up fastbook, the Python package necessary for the course.

The fifth line of the code imports the fastai vision library.

The sixth line of the code uses the untar_data function to download the PETS dataset from the fast.ai datasets collection. The function can also be used to download data from a custom URL. It returns a Path object pointing to the extracted location. On my SageMaker notebook the images are downloaded to the ~/.fastai/data/ location, which is the default when the dest parameter is not specified.

In the eighth line, a function called is_cat is defined to label the dataset based on a filename rule: the filenames start with an uppercase letter if the image is a cat, and a lowercase letter otherwise. So Siamese_63.jpg is the 63rd example of an image of a Siamese cat in the dataset, and wheaten_terrier_139.jpg is an image of a dog.
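You can verify the labeling rule on the two filenames mentioned above with nothing but plain Python:

```python
# The labeling rule from the notebook: cat breeds are capitalized,
# dog breeds are lowercase in the PETS dataset filenames.
def is_cat(x):
    return x[0].isupper()

print(is_cat("Siamese_63.jpg"))          # True  -> labeled as cat
print(is_cat("wheaten_terrier_139.jpg")) # False -> labeled as dog
```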

The below block of code tells fastai what kind of dataset we have and how it is structured.

```
dls = ImageDataLoaders.from_name_func(
    path,
    get_image_files(path),
    valid_pct=0.2,
    seed=42,
    label_func=is_cat,
    item_tfms=Resize(224)
)
```

As this is an image classification problem, we are using the ImageDataLoaders class. We provide the path variable, which is the location of all the images. fastai’s get_image_files function takes path as input and returns all the image files found there. The next parameter is valid_pct, which states what percentage of the total data should be held out and used only for validation. This 20% of the data is called the validation set and is used to measure the accuracy of the model; fastai will always show you your model’s accuracy using only the validation set, never the training set. Using label_func we tell fastai how to get the labels from the dataset. Finally, we define a Transform. A `Transform` contains code that is applied automatically during training. There are two kinds: `item_tfms` are applied to each item (in this case, each item is resized to a 224-pixel square), while `batch_tfms` are applied to a batch of items at a time using the GPU, so they’re particularly fast.
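To make valid_pct=0.2 concrete, here is a rough sketch of the split it produces. This assumes the PETS dataset's 7,390 images and uses a plain shuffle for illustration (fastai's actual splitter differs internally, but the bookkeeping is the same):

```python
import random

# Hypothetical stand-ins for the real image files.
files = [f"img_{i}.jpg" for i in range(7390)]
random.seed(42)            # like seed=42: makes the split reproducible
random.shuffle(files)

n_valid = int(len(files) * 0.2)   # valid_pct=0.2 -> hold out 20%
valid_files = files[:n_valid]
train_files = files[n_valid:]

print(len(train_files), len(valid_files))  # 5912 1478
```

The fixed seed matters: without it, every run would hold out a different 20%, and accuracy numbers would not be comparable across runs.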

Notes from Aman’s session –

• Item and Batch Transformation and Epochs

Suppose each batch contains 10 images; once all the batches have been passed through the GPU once, that is called one epoch. So when your model has seen all of your dataset once, it has completed one epoch.
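The batch/epoch arithmetic can be written down directly (illustrative numbers only):

```python
import math

n_images = 100     # hypothetical dataset size
batch_size = 10    # images passed to the GPU at a time

# One epoch = every image seen once, i.e. this many batches:
batches_per_epoch = math.ceil(n_images / batch_size)
print(batches_per_epoch)  # 10

# Training for 4 epochs therefore processes 40 batches in total.
print(batches_per_epoch * 4)  # 40
```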

• Training and Validation set
• Example of Overfitting

We want the model to learn the general patterns from the training dataset and not to memorize it. Memorizing the dataset is what causes overfitting.
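Here is a toy illustration of the difference, with made-up data: a "memorizer" that stores every training pair verbatim looks perfect on the training set but fails on new data, while a model that learned the underlying pattern generalizes.

```python
train = [(1, 3.1), (2, 4.9), (3, 7.2)]   # noisy samples of y ≈ 2x + 1
valid = [(4, 9.0), (5, 11.1)]             # unseen data

memorized = dict(train)

def memorizer(x):
    # perfect recall of training pairs, clueless on anything unseen
    return memorized.get(x, 0.0)

def linear(x, w=2.0, b=1.0):
    # a model that captured the general pattern y = w*x + b
    return w * x + b

def mse(model, data):
    # mean squared error of a model's predictions over a dataset
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(memorizer, train))          # 0.0 — looks perfect on training data
print(round(mse(memorizer, valid), 2))  # large — it never generalized
print(round(mse(linear, valid), 3))     # small — the pattern transfers
```

This is exactly why accuracy is always reported on the validation set: training-set performance cannot distinguish learning from memorization.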

Next in the code, the image recognizer line tells fastai to create a convolutional neural network (CNN) and specifies which architecture to use, what data to train it on, and which metric to use:

`learn = cnn_learner(dls, resnet34, metrics=error_rate)`

CNNs are the current state-of-the-art approach for creating computer vision models. ResNet is a particular type of CNN, and the 34 in resnet34 refers to the number of layers in this model architecture. The concept we use here is called transfer learning. Transfer learning (TL) is a research problem in machine learning (ML) that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. Using such pretrained models allows us to train more accurate models more quickly, with less data and less time.

Next is the metric. A metric is a function that measures the quality of the model’s predictions using the validation set and is printed at the end of each epoch. error_rate tells you the percentage of images in the validation set that are being classified incorrectly. Another common metric for classification is accuracy (which is just 1.0 – error_rate).
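With hypothetical numbers, the two metrics are just two views of the same count of mistakes:

```python
# Made-up example: 7 of 1478 validation images misclassified.
n_valid = 1478
n_wrong = 7

error_rate = n_wrong / n_valid    # fraction of validation images wrong
accuracy = 1.0 - error_rate       # fraction of validation images right

print(round(error_rate, 4))  # 0.0047
print(round(accuracy, 4))    # 0.9953
```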

In Figure 3 we have a block labeled loss. Loss and metrics are distinct. The purpose of the loss is to define a “measure of performance” that the training system can use to update the weights automatically; the loss tells SGD how to update the parameters/weights. A metric, on the other hand, is defined for human consumption, to tell us how the model is doing. It measures the agreement between the actual labels and the predictions, but it is not used by the model to do the learning.
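A small sketch makes the distinction concrete. Both predictions below are "correct", so an accuracy-style metric sees no difference between them, but the cross-entropy loss (implemented here by hand, not via fastai) still does, which is what gives SGD something to improve:

```python
import math

def cross_entropy(prob_of_true_class):
    # standard cross-entropy loss for a single prediction
    return -math.log(prob_of_true_class)

confident = 0.99   # model A: 99% sure of the correct class
hesitant = 0.51    # model B: barely picks the correct class

# Both are "correct" (prob > 0.5), so accuracy sees no difference...
print(confident > 0.5, hesitant > 0.5)     # True True
# ...but the loss still distinguishes them, so training can keep improving B.
print(round(cross_entropy(confident), 3))  # 0.01
print(round(cross_entropy(hesitant), 3))   # 0.673
```

This smoothness is exactly why loss, not accuracy, is used for the weight updates: a tiny weight change barely moves accuracy, but it always moves the loss.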

cnn_learner also has a parameter called pretrained, which defaults to True and sets the weights in your model to values that have already been trained. The early layers of a pretrained model handle things like edge, gradient, and color detection. When using a pretrained model, cnn_learner will remove the last layer, since that is always customized to the original training task, and replace it with one or more new layers with randomized weights, of an appropriate size for the dataset you are working with. This last part of the model is known as the head.

The last line of code tells fastai how to fit the model:

`learn.fine_tune(4)`

fastai has two methods, fine_tune and fit. The fit method fits a model: it looks at the images in the training dataset multiple times, each time updating the parameters/weights to make the predictions closer to the target labels. In this case we started with the resnet34 pretrained model, so fine_tune will perform only the required fine-tuning.

fine_tune does two steps:

1. Use one epoch to fit just those parts of the model necessary to get the new random head to work correctly with your dataset.
2. Use the number of epochs requested when calling the method to fit the entire model, updating the weights of the later layers (especially the head) faster than the earlier layers.

The head of the model is the part that is newly added to be specific to the new dataset.

9. Image recognizers can tackle non-image tasks

The book has some interesting instances where image recognition techniques were used to solve non-image problems. For example, fast.ai student Ethan Sutin converted sound to a spectrogram and used image recognition techniques to beat the published accuracy.

10. Deep Learning Vocabulary

11. Deep Learning for Segmentation using fastai

Creating a model that can recognize the content of every individual pixel in an image is called segmentation.

```
path = untar_data(URLs.CAMVID_TINY)
dls = SegmentationDataLoaders.from_label_func(
    path, bs=8, fnames = get_image_files(path/"images"),
    label_func = lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',
    codes = np.loadtxt(path/'codes.txt', dtype=str)
)

learn = unet_learner(dls, resnet34)
learn.fine_tune(8)

learn.show_results(max_n=6, figsize=(20,20))
```
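The label_func lambda in this block just maps each image path to the path of its label mask. Here is what that mapping does, using pathlib and a hypothetical CamVid-style filename:

```python
from pathlib import Path

# Hypothetical dataset root and image name, following the CamVid layout
# where each image's mask lives in labels/ with a "_P" suffix.
path = Path("camvid_tiny")
label_func = lambda o: path/'labels'/f'{o.stem}_P{o.suffix}'

img = path/'images'/'0016E5_00390.png'
print(label_func(img).name)   # 0016E5_00390_P.png
```

So `images/0016E5_00390.png` is paired with `labels/0016E5_00390_P.png`, which is how fastai knows which mask belongs to which image.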

12. Deep Learning for Natural Language Processing (NLP) using fastai

```
from fastai.text.all import *

dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)

learn.predict("I really liked that movie!")
```

13. Deep Learning for Tabular Dataset using fastai

```
from fastai.tabular.all import *

path = untar_data(URLs.ADULT_SAMPLE)
dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names="salary",
    cat_names = ['workclass', 'education', 'marital-status', 'occupation',
                 'relationship', 'race'],
    cont_names = ['age', 'fnlwgt', 'education-num'],
    procs = [Categorify, FillMissing, Normalize])

learn = tabular_learner(dls, metrics=accuracy)

learn.fit_one_cycle(3)

learn.show_results()
```

14. Validation Sets and Test Sets

The first step is to split the dataset into two sets:

• training set – on which the model is trained
• validation set – used to evaluate a given model during development. This dataset is used to tune the model’s hyperparameters.

The test set is used only to evaluate the model at the very end. The training data is fully exposed, the validation data is less exposed, and the test data is totally hidden.
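The three-way split can be sketched with made-up sizes; the important part is carving out the test set first and never touching it during development:

```python
import random

random.seed(0)                 # fixed seed so the split is reproducible
data = list(range(1000))       # hypothetical dataset of 1000 examples
random.shuffle(data)

test = data[:100]              # totally hidden until the very end
valid = data[100:300]          # used to tune hyperparameters
train = data[300:]             # fully exposed to the model

print(len(train), len(valid), len(test))  # 700 200 100
```

Because the three slices never overlap, the test-set score at the end is an honest estimate of how the model will do on genuinely new data.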

That concludes my notes from Chapter 1. Stay tuned for the upcoming chapters 🙂
