# Week 7 - Transformers & Hugging Face

## Course session

**Explanatory Session Part 1**

Self-attention and multi-head attention
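A minimal NumPy sketch (not the session code) of the scaled dot-product attention at the heart of self-attention; multi-head attention runs several of these in parallel on split projections:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # (..., seq, seq)
    weights = softmax(scores, axis=-1)              # each row sums to 1
    return weights @ V, weights

def multi_head_attention(X, W_q, W_k, W_v, W_o, h):
    """Split d_model into h heads, attend per head, concatenate, project."""
    seq, d_model = X.shape
    d_head = d_model // h
    # Project, then reshape to (h, seq, d_head) so heads attend independently.
    Q = (X @ W_q).reshape(seq, h, d_head).transpose(1, 0, 2)
    K = (X @ W_k).reshape(seq, h, d_head).transpose(1, 0, 2)
    V = (X @ W_v).reshape(seq, h, d_head).transpose(1, 0, 2)
    out, _ = scaled_dot_product_attention(Q, K, V)   # (h, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq, d_model)  # concat heads
    return out @ W_o
```

The weight matrices here are random stand-ins; in a real model they are learned parameters.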

**Hugging Face Introduction**

The Hugging Face library and how to use it for working with transformer models

{% embed url="<https://colab.research.google.com/drive/17B8QU2hM8sxmrbvvigq1orH_Angf9qsP?usp=sharing>" %}
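Not the notebook above, but a minimal sketch of the Hugging Face `pipeline` API the session introduces (the first call downloads a default model for the task):

```python
from transformers import pipeline

# A pipeline bundles tokenizer + model + post-processing for a given task.
classifier = pipeline("sentiment-analysis")

result = classifier("Transformers make NLP much easier!")
print(result)  # a list of dicts with 'label' and 'score' keys
```

Other task strings such as `"text-generation"` or `"image-classification"` work the same way.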

**Explanatory Session Part 2**

Transformer Encoder and Positional Encoding
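A sketch of the sinusoidal positional encoding from the original Transformer paper, which the session covers, assuming an even model dimension:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)),
       PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))."""
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)  # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions get sine
    pe[:, 1::2] = np.cos(angles)  # odd dimensions get cosine
    return pe
```

This matrix is simply added to the token embeddings so the encoder, which is otherwise permutation-invariant, can use position information.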

**Explanatory Session Part 3**

Vision Transformer
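The first step of a Vision Transformer is splitting the image into non-overlapping patches that are flattened and treated as tokens; a NumPy sketch of that patchify step:

```python
import numpy as np

def image_to_patches(img, patch):
    """Split an (H, W, C) image into non-overlapping flattened patches."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    n_h, n_w = H // patch, W // patch
    patches = (img.reshape(n_h, patch, n_w, patch, C)
                  .transpose(0, 2, 1, 3, 4)        # group the two patch axes
                  .reshape(n_h * n_w, patch * patch * C))
    return patches

# A learned linear projection of each flattened patch gives the patch
# embeddings; a [CLS] token is prepended and positional embeddings added
# before the sequence enters the Transformer encoder.
```

With the ViT-Base defaults (224x224 RGB input, 16x16 patches) this yields 196 tokens of dimension 768.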

**Walk-through**

Fine-tuning a Vision Transformer on the Kaggle Paddy dataset

{% embed url="<https://colab.research.google.com/drive/1hRMznfY1zPhR36-Qs2IteU215Zyo2qQL?usp=sharing>" %}

## To-do

😊

Look at current Kaggle competitions and make proposals

😊😊

Go through this excellent site explaining Transformers:

{% embed url="<http://jalammar.github.io/illustrated-transformer/>" %}

Do Chapters 1-3 of the Hugging Face NLP course

{% embed url="<https://huggingface.co/learn/nlp-course/chapter1/1>" %}

😊😊😊

Work through the UvA Deep Learning tutorial on Transformers and multi-head attention:

{% embed url="<https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/tutorial6/Transformers_and_MHAttention.html>" %}

Look closer at the PyTorch module `nn.Transformer` ([documentation](https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html)) and go through a [tutorial](https://pytorch.org/tutorials/beginner/transformer_tutorial.html) on how to use it for next-token prediction.
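A key ingredient in next-token prediction is the causal mask that hides future positions from each token; `nn.Transformer` expects such a mask, and a NumPy sketch of the same upper-triangular pattern is:

```python
import numpy as np

def causal_mask(seq_len):
    """Boolean mask: True above the diagonal marks positions a token
    must NOT attend to (its own future)."""
    return np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
```

During attention, the masked score positions are set to a large negative value so the softmax assigns them (near-)zero weight.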

Watch this excellent build-from-scratch video by Andrej Karpathy:

{% embed url="<https://youtu.be/kCc8FmEb1nY>" %}

Work through the UvA Deep Learning tutorial on the Vision Transformer:

{% embed url="<https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/tutorial15/Vision_Transformer.html>" %}
