Chain of Thought

Chain of Thought (CoT) is a prompting approach that does not just give the model more or better context like the previous methods (e.g. few-shot prompting, the six golden prompting rules), but lets the model think longer to arrive at the correct answer. More thinking here means more computation. As you can see in the figure below, without chain-of-thought the model has essentially just one forward pass through the transformer to produce the answer. That might be enough for simple tasks like sentiment analysis, but not for more complex tasks. With the think-step-by-step approach the model can use more computation to get to the correct result. This is actually not very different from how humans are instructed to work step by step through complex problems.

Image Source: Kojima et al. (2022)
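The simplest variant is zero-shot CoT (Kojima et al., 2022): you append a trigger phrase such as "Let's think step by step." to the question. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name is a placeholder, not prescribed by the course material.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?"
)

# Zero-shot CoT: the appended trigger phrase makes the model generate
# intermediate reasoning steps (= more computation) before the final answer.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question + "\nLet's think step by step."}],
)
print(response.choices[0].message.content)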

Deep Dive into LLMs

Week 1 - Introduction

This week you will...

  • get to know the course structure and your fellow course mates

  • get an idea about possible projects and the offered default projects

Learning Resources

  • Week 1 of the LLM Coursera course (Audit Mode)

Until next week you should...

Week 4 - Prompt Engineering

This week you will...

  • Master Prompt Engineering

  • Familiarize yourself with different prompting frameworks

Slides

Until next week you should...

  • Go through the learning material below

  • Apply and try the learned prompt engineering techniques on your project and report on your findings in the next session

Learning Resources

Prompt engineering is a relatively new discipline for developing and optimizing prompts (i.e. the text inputs) to get the best out of large language models (LLMs) for a wide variety of tasks. This means that we manipulate the text input to the model with the goal of getting the best or most desired output from the model.

Prompt engineering skills help us better understand the capabilities and limitations of LLMs, and they are very valuable for improving LLM performance on a wide range of common and complex tasks such as question answering and arithmetic reasoning.

We can also view prompt engineering as a set of methods for communicating with the LLM to steer its behavior towards desired outcomes. One key point of prompt engineering methods is that they do not touch or change the model weights. The LLM stays completely frozen and the only change happens in the input values - the prompts.

Prompt engineering is a very empirical science and the effect of specific prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.

We will now look at different prompt engineering methods.

Basic Prompting

Zero-shot and few-shot learning are the two most basic approaches for prompting the model, pioneered by many LLM papers and commonly used for benchmarking LLM performance.

Zero-Shot

Zero-shot prompting means simply feeding the task text to the model and asking for the result.

Prompt:

Classify the text into neutral, negative or positive.

Text: I think this course is amazing.
Sentiment:

Output:

positive

As you can see, for sophisticated LLMs and sufficiently easy tasks this is already enough to achieve the aim.

Few-shot

Few-shot learning presents a set of demonstrations, each consisting of both input and desired output, on the target task. Normally they are high quality examples. As the model first sees good examples, it can better understand human intention and criteria for what kinds of answers are expected. Therefore, few-shot learning often leads to better performance than zero-shot. However, it comes at the cost of more token consumption.

Prompt:

A "whatpu" is a small, furry animal native to Tanzania. An example of a sentence that uses the word whatpu is:
We were traveling in Africa and we saw these very cute whatpus.
To do a "farduddle" means to jump up and down really fast. An example of a sentence that uses the word farduddle is:

Output:

When we won the game, we all started to farduddle in celebration.
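In code, few-shot prompting just means concatenating the demonstrations in front of the new input; with an empty demonstration list the same code degenerates to zero-shot prompting. A minimal sketch, assuming the OpenAI Python SDK and a placeholder model name; build_prompt is a hypothetical helper, not a library function.

from openai import OpenAI

client = OpenAI()

# High-quality (input, desired output) demonstrations for the target task.
demonstrations = [
    ('A "whatpu" is a small, furry animal native to Tanzania. An example of '
     "a sentence that uses the word whatpu is:",
     "We were traveling in Africa and we saw these very cute whatpus."),
]

def build_prompt(demos, new_input):
    # Hypothetical helper: prepend the solved examples to the new input.
    parts = [f"{inp}\n{out}" for inp, out in demos]
    parts.append(new_input)
    return "\n".join(parts)

new_input = ('To do a "farduddle" means to jump up and down really fast. '
             "An example of a sentence that uses the word farduddle is:")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": build_prompt(demonstrations, new_input)}],
)
print(response.choices[0].message.content)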

For further improvements we first have to understand the elements of a prompt.

Elements of a Prompt

A prompt contains any of the following elements:

Instruction - a specific task or instruction you want the model to perform

Context - external information or additional context that can steer the model to better responses

Input Data - the input or question that we are interested to find a response for

Output Indicator - the type or format of the output.

You do not need all four elements in a prompt; the format depends on the task at hand.
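For illustration, here is how the four elements might be assembled into one prompt string in Python (a sketch; the labels and layout are illustrative, not a fixed standard):

# Illustrative template combining the four prompt elements.
template = """\
{instruction}

Context: {context}

{input_data}

{output_indicator}"""

prompt = template.format(
    instruction="Classify the text into neutral, negative or positive.",  # Instruction
    context="The texts are student reviews of a university course.",      # Context
    input_data="Text: I think this course is amazing.",                   # Input Data
    output_indicator="Sentiment:",                                        # Output Indicator
)
print(prompt)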

Best Practices

Usually, the more specific and relevant the context is to the task we are trying to perform, the better.

We should be very specific about the instruction and the task we want the model to perform. The more descriptive and detailed the prompt is, the better the results. This is particularly important when we are after a specific outcome or style of generation. There are no special tokens or keywords that reliably lead to better results; a good format and a descriptive prompt matter more. In fact, providing examples in the prompt is very effective for getting the desired output in specific formats.

Here are the six golden rules for good prompting:

  1. Write as clearly and precisely as possible.

  2. Provide as much context as possible/necessary.

  3. Really use ChatGPT as a chat interaction.

  4. Iterate until you are satisfied with the results.

  5. Break down the task into individual steps.

  6. Provide examples.

Next we will look at specific prompting frameworks which improve the outputs even further, especially for more complex tasks.

Resources:

  • https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/

Week 3 - Introduction to Transformers

This week you will...

  • Master the basics of attention and transformers

  • Familiarize yourself with advanced models

Slides

Learning Resources

  • Introduction to transformers

  • Contextual Word Representations Part 1

  • Stanford lecture on transformers

  • Rasa series on Attention

Until next week you should...


More techniques

Self Consistency

The idea is to sample multiple, diverse reasoning paths through few-shot CoT, and use the generations to select the most consistent answer.
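A minimal sketch of self-consistency, assuming the OpenAI Python SDK and a placeholder model name. For brevity it uses the zero-shot CoT trigger instead of few-shot demonstrations, and extract_answer is a hypothetical parser for the final numeric answer.

import re
from collections import Counter
from openai import OpenAI

client = OpenAI()

prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\nA: Let's think step by step."
)

# Sample several diverse reasoning paths: temperature > 0, n completions.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.8,
    n=5,
)

def extract_answer(text):
    # Hypothetical parser: take the last number in the completion as the answer.
    numbers = re.findall(r"\d+", text)
    return numbers[-1] if numbers else None

answers = [extract_answer(choice.message.content) for choice in response.choices]

# Majority vote across the sampled paths selects the most consistent answer.
print(Counter(a for a in answers if a is not None).most_common(1))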

Generated Knowledge Prompting

First generate some knowledge given the question. Then answer the question with the generated knowledge as context in a second step.
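A minimal two-step sketch, assuming the OpenAI Python SDK and a placeholder model name; ask is a hypothetical convenience wrapper.

from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(prompt):
    # Hypothetical wrapper around a single chat completion call.
    response = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

question = "Part of golf is trying to get a higher point total than others. Yes or No?"

# Step 1: generate knowledge about the question.
knowledge = ask(f"Generate some knowledge relevant to the question:\n{question}")

# Step 2: answer the question with the generated knowledge as context.
answer = ask(f"Knowledge: {knowledge}\n\nQuestion: {question}\nAnswer:")
print(answer)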

Tree of Thoughts (ToT)

ToT maintains a tree of thoughts, where thoughts represent coherent language sequences that serve as intermediate steps toward solving a problem. In plain English this means that multiple solution approaches are tried and one keeps a kind of log of what works and what does not. If you have ever solved a Sudoku, you are probably familiar with this back-and-forth exploration, and in one of the original ToT papers the authors reported huge success with this method for solving Sudokus.

The ToT approach essentially enables an LM to self-evaluate the progress intermediate thoughts make towards solving a problem through a deliberate reasoning process. The LM's ability to generate and evaluate thoughts is then combined with search algorithms (e.g., breadth-first search and depth-first search) to enable systematic exploration of thoughts with lookahead and backtracking. While this leads to much better performance on complex tasks, it requires far more LM calls and takes quite some time, so if you want to build a low-latency chatbot this approach should not be used.
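A heavily simplified breadth-first sketch of the ToT loop, assuming the OpenAI Python SDK and a placeholder model name; ask, propose, and score are hypothetical prompt wrappers, not part of any library.

from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(prompt):
    # Hypothetical wrapper around a single chat completion call.
    response = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def propose(problem, partial):
    # Hypothetical thought generator: suggest one next intermediate step.
    # Sampling randomness makes repeated calls yield different candidates.
    return ask(f"Problem: {problem}\nSteps so far:\n{partial}\nPropose the next step:")

def score(problem, partial):
    # Hypothetical self-evaluation: let the LM rate the partial solution 1-10.
    reply = ask(f"Problem: {problem}\nSteps so far:\n{partial}\n"
                "Rate how promising this is on a scale of 1-10. Reply with a number only:")
    try:
        return float(reply.strip().split()[0])
    except ValueError:
        return 0.0

def tree_of_thoughts(problem, breadth=3, depth=3, keep=2):
    frontier = [""]  # each entry is a partial chain of thoughts
    for _ in range(depth):
        # Expand every kept partial solution into several candidate thoughts.
        candidates = [p + "\n" + propose(problem, p)
                      for p in frontier for _ in range(breadth)]
        # Prune: keep only the most promising partial solutions (BFS with lookahead).
        candidates.sort(key=lambda p: score(problem, p), reverse=True)
        frontier = candidates[:keep]
    return frontier[0]

print(tree_of_thoughts("Use the numbers 4, 9, 10, 13 and +, -, *, / to reach 24."))

Note how even this toy version makes breadth x keep generation calls plus as many scoring calls per depth step, which is exactly the cost caveat mentioned above.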

Week 9 - Advisory Session

This week you will...

  • ...

Slides

...

Learning Resources

  • ...

Until next week you should...

Week 10 - Project Presentations




Week 5 - RAG and Agents

This week you will...

  • Learn about Retrieval Augmented Generation (RAG)

  • Learn about Agents

Learning Resources

  • Watch OpenAI's Tips and Tricks on RAG and Fine-tuning

  • Get to know OpenAI function calling

Until next week you should...

Week 2 - Tokens & Embeddings revisited

This week you will...

  • Gain a broad understanding of how language models have evolved, from early methods like n-grams to advanced transformer architectures.

  • Understand the significance and limitations of word embeddings and recurrent neural networks, including LSTMs.

Slides

Learning Resources

  • Stanford CS224N: NLP with Deep Learning

  • Stanford XCS224U: NLU

Until next week you should...

TELeR: A General Taxonomy of LLM Prompts for Benchmarking Complex Tasks

In one interesting paper the authors emphasize the importance of a standardized taxonomy for LLM prompts targeted at solving complex tasks, and subsequently provide such a taxonomy, TELeR, which can be used by independent researchers to report their results using a single unified standard.

As developers we can look at the highest level (level 6) and use all of the listed prompt details for the prompts we design for our task.

Image Source: Santu et al. (2023)

Week 7 - Fine-Tuning I

This week you will...

  • know how to prepare the data for training LLMs.

  • get a better technical understanding of how to train LLMs.

  • learn about different alignment approaches such as RLHF and RLAIF using PPO, or DPO.

Learning Resources

  • Training a causal language model from scratch from the Hugging Face NLP course.

  • Video by Andrej Karpathy explaining how to train a GPT from scratch.

Until next week you should...

Week 6 - Model Evaluation

This week you will...

  • get to know Weights & Biases, a popular platform to evaluate deep learning models.

  • understand different evaluation metrics.

  • get a high-level introduction into training LLMs.

Learning Resources

  • Short course by Deeplearning.AI and Weights & Biases on how to use the Weights & Biases framework to track and evaluate your model results.

Until next week you should...

Week 8 - Fine-Tuning II and Model Inference

This week you will...

  • explore advanced training techniques designed to train large models efficiently, minimizing computational requirements.

  • gain comprehensive insights into the key hyperparameters for effective model inference.

  • discover the unique attributes of inference processes for streaming Large Language Models (LLMs).

Learning Resources

  • Week 2 and week 3 of the course Generative AI with Large Language Models

Until next week you should...
