In one interesting paper, the authors emphasize the importance of a standardized taxonomy for LLM prompts aimed at solving complex tasks, and they provide such a taxonomy, called TELeR, which multiple independent researchers can use to report their results in a single unified standard.
As developers, we can look at the highest level (level 6) and include all of the listed prompt details in the prompts we design for our task.
Chain of Thought (CoT) is essentially a prompting approach that does not just give the model more or better context like the previous methods (e.g. few-shot prompting or the six golden prompting rules), but lets the model think longer to arrive at the correct answer. More thinking here means more computation: without chain-of-thought, the model basically has just one forward pass through the transformer to get to the answer. That might be enough for simple tasks like sentiment analysis, but not for more complex tasks. With this think-step-by-step approach the model can spend more computation to get to the correct result. This is actually not very different from how humans are instructed to work step by step through complex problems.
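For illustration, here is the kind of prompt this refers to; the arithmetic word problem is the classic example from the chain-of-thought literature, and the exact wording is only illustrative:

Prompt:
Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now? Let's think step by step.

Output:
Roger starts with 5 balls. 2 cans of 3 tennis balls each is 6 balls. 5 + 6 = 11. The answer is 11.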
Master Prompt Engineering
Familiarize yourself with different prompting frameworks
Go through the learning material below
Apply and try the learned prompt engineering techniques on your project and report on your findings in the next session
Prompt engineering is a relatively new discipline for developing and optimizing prompts (a.k.a. the text inputs) to get the best out of large language models (LLMs) for a wide variety of tasks. This means that we manipulate the text input to the model with the goal of getting the best or most desired output out of the model.
Prompt engineering skills help us to better understand the capabilities and limitations of LLMs, and they are very valuable for improving LLM performance on a wide range of common and complex tasks such as question answering and arithmetic reasoning.
We can also look at it this way: prompt engineering refers to methods for communicating with the LLM to steer its behavior towards desired outcomes. One key point of prompt engineering methods is that they do not touch or change the model weights. The LLM stays completely frozen and the only change happens in the input values, i.e. the prompts.
Prompt engineering is a very empirical science and the effect of specific prompt engineering methods can vary a lot among models, thus requiring heavy experimentation and heuristics.
We will now look at different prompt engineering methods.
Zero-shot and few-shot learning are the two most basic approaches for prompting the model, pioneered by many LLM papers and commonly used for benchmarking LLM performance.
Zero-shot learning is to simply feed the task text to the model and ask for the result. An illustrative sentiment-classification example:

Prompt:
Classify the sentiment of the following text as positive or negative.
Text: I'll bet the video game is a lot more fun than the film.
Sentiment:

Output:
negative
As you can see, for sophisticated LLMs and easy enough tasks this is already enough to achieve the aim.
Few-shot learning presents a set of demonstrations on the target task, each consisting of both an input and the desired output. Normally these are high-quality examples. As the model first sees good examples, it can better understand human intention and the criteria for what kinds of answers are expected. Therefore, few-shot learning often leads to better performance than zero-shot. However, it comes at the cost of more token consumption.
Prompt:
Text: The acting was superb and the story kept me hooked until the very end.
Sentiment: positive
Text: Despite the big budget, the plot is a complete mess.
Sentiment: negative
Text: I'll bet the video game is a lot more fun than the film.
Sentiment:

Output:
negative
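A minimal sketch of sending such a few-shot prompt to a model, assuming the OpenAI Python SDK; the model name is a placeholder, and any chat-completion API works analogously:

# Few-shot sentiment classification via a chat-completion API.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

few_shot_prompt = (
    "Text: The acting was superb and the story kept me hooked until the very end.\n"
    "Sentiment: positive\n\n"
    "Text: Despite the big budget, the plot is a complete mess.\n"
    "Sentiment: negative\n\n"
    "Text: I'll bet the video game is a lot more fun than the film.\n"
    "Sentiment:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": few_shot_prompt}],
    temperature=0,        # deterministic output for classification
)
print(response.choices[0].message.content)  # expected: "negative"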
For further improvements we first have to understand the elements of a prompt.
A prompt contains any of the following elements:
Instruction - a specific task or instruction you want the model to perform
Context - external information or additional context that can steer the model to better responses
Input Data - the input or question that we are interested to find a response for
Output Indicator - the type or format of the output.
You do not need all four elements for a prompt, and the format depends on the task at hand.
Usually, the more specific and relevant the context is to the task we are trying to perform, the better.
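An illustrative prompt that contains all four elements (the task and wording are made up):

Instruction: Summarize the customer review below in one sentence.
Context: The review was left on an online electronics store and refers to a pair of wireless headphones.
Input Data: "The battery lasts forever, but the ear cushions started falling apart after two weeks."
Output Indicator: Answer with a single sentence that starts with "Summary:".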
We should be very specific about the instruction and task we want the model to perform. The more descriptive and detailed the prompt is, the better the results. This is particularly important when we are seeking a specific outcome or style of generation. There aren't specific tokens or keywords that reliably lead to better results; it is more important to have a good format and a descriptive prompt. In fact, providing examples in the prompt is very effective for getting the desired output in specific formats.
Here are the six golden rules for good prompting (a short before/after example follows the list):
Write as clearly and precisely as possible.
Provide as much context as possible/necessary.
Really use ChatGPT as a chat interaction.
Iterate until you are satisfied with the results.
Break down the task into individual steps.
Provide examples.
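A made-up before/after example of applying these rules:

Vague prompt:
Write something about our new product.

Improved prompt:
You are writing for the newsletter of a small software company. Write a three-sentence announcement of our new time-tracking app for freelancers, mention the free 30-day trial, and use a friendly, non-technical tone. Here is an example of the tone we want: "Our new invoicing tool is here, and it makes billing your clients almost fun."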
Next we will look at specific prompting frameworks which improve the outputs even further, especially for more complex tasks.
Resources:
https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/
Self-consistency sampling: the idea is to sample multiple, diverse reasoning paths through few-shot CoT and use the generations to select the most consistent answer.
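A minimal sketch of self-consistency with majority voting, assuming the OpenAI Python SDK; the question, the number of samples, and the temperature are arbitrary illustrative choices:

# Self-consistency: sample several chain-of-thought completions and majority-vote the final answer.
# Assumes each completion ends with a line of the form "Answer: <number>".
from collections import Counter
from openai import OpenAI

client = OpenAI()

cot_prompt = (
    "Q: A farmer has 17 sheep, buys 5 more and then sells 3. How many sheep does he have now?\n"
    "Think step by step, then give the final result on the last line as 'Answer: <number>'."
)

answers = []
for _ in range(5):  # sample 5 diverse reasoning paths
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": cot_prompt}],
        temperature=0.8,      # higher temperature -> more diverse reasoning paths
    )
    text = response.choices[0].message.content
    for line in reversed(text.splitlines()):  # find the final "Answer:" line
        if line.strip().lower().startswith("answer:"):
            answers.append(line.split(":", 1)[1].strip())
            break

most_common, votes = Counter(answers).most_common(1)[0]
print(f"Most consistent answer: {most_common} ({votes}/{len(answers)} votes)")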
Generated knowledge prompting: first generate some knowledge given the question, then answer the question with that knowledge as context in a second step.
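An illustrative two-step example (the question and the generated facts are made up for demonstration):

Step 1 - Prompt:
Generate two short facts about how penguins keep warm.

Step 1 - Output:
1. Penguins have a dense layer of overlapping feathers that traps air and insulates them.
2. A thick layer of blubber under the skin provides additional insulation.

Step 2 - Prompt:
Knowledge: Penguins have a dense layer of overlapping feathers that traps air, and a thick layer of blubber under the skin.
Question: Why can penguins survive in very cold water?
Answer: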
Tree of Thoughts (ToT) maintains a tree of thoughts, where thoughts represent coherent language sequences that serve as intermediate steps toward solving a problem. In plain English, this means that multiple solution approaches are tried and one keeps a kind of log of what works and what does not. If you have ever solved a Sudoku you are probably familiar with this back-and-forth exploration, and in one of the original ToT papers the authors reported huge success with this method for solving Sudokus.
The ToT approach essentially enables an LM to self-evaluate the progress its intermediate thoughts make towards solving a problem through a deliberate reasoning process. The LM's ability to generate and evaluate thoughts is then combined with search algorithms (e.g., breadth-first search and depth-first search) to enable systematic exploration of thoughts with lookahead and backtracking. While this leads to much better performance on complex tasks, it requires far more LM calls and takes quite some time, so if you want to build a responsive chatbot this approach should not be used.
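A minimal sketch of a breadth-first Tree-of-Thoughts loop, assuming the OpenAI Python SDK; the prompts, the model name, and the breadth/depth settings are placeholder choices, not the ones used in the original papers:

# Tree-of-Thoughts style breadth-first search (schematic sketch).
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(prompt: str, temperature: float = 0.7) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

def propose_thoughts(state: str, k: int = 3) -> list[str]:
    # One LM call proposes k candidate next steps for the current partial solution.
    text = ask(
        f"Problem and work so far:\n{state}\n\n"
        f"Propose {k} different possible next steps, one per line."
    )
    return [line.strip() for line in text.splitlines() if line.strip()][:k]

def score_thought(state: str) -> float:
    # A second LM call self-evaluates how promising the partial solution is (0-10).
    text = ask(
        f"Rate from 0 to 10 how promising this partial solution is. Reply with a number only.\n\n{state}",
        temperature=0,
    )
    try:
        return float(text.strip().split()[0])
    except ValueError:
        return 0.0

def tree_of_thoughts_bfs(problem: str, depth: int = 3, breadth: int = 2) -> str:
    frontier = [problem]                  # current set of partial solutions
    for _ in range(depth):                # expand the tree level by level
        candidates = [s + "\n" + t for s in frontier for t in propose_thoughts(s)]
        # keep only the most promising candidates (beam-style pruning)
        frontier = sorted(candidates, key=score_thought, reverse=True)[:breadth]
    return frontier[0]

Note how every tree level multiplies the number of LM calls, which is exactly why this approach is slow and expensive compared to a single prompt.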