Title: Prompting Techniques for Improving LLMs' Reasoning Capability

  • LLMs Do Not Have Memory
  • Prompting Techniques for Better Reasoning
  • Multi-action within a Prompt
  • Prompt Chaining
  • Exception Handling
  • Hands-on Walkthrough and Tasks


Considerations for Prompting Techniques in Varying Model Capabilities
  • ✦ The techniques covered in this section enhance the reasoning capability of LLMs so that they produce more accurate and reliable outputs, particularly on complex tasks, by organizing their thought processes and learning from both correct and incorrect reasoning patterns.
    • They are particularly useful for small or less capable models, or when you want to get the best out of an LLM's reasoning capability.
    • You may not be able to replicate the examples where the LLM generates incorrect or less desirable outputs, as these issues are more often observed in less capable models such as GPT-3.5 (especially versions prior to Q3 2023).

  • ✦ In early 2024, the cost of highly capable models like GPT-4 or Claude 3 Opus led many builders and developers to opt for cheaper models like GPT-3.5-turbo.
    • However, by the second half of 2024, highly price-efficient models with very decent performance emerged, such as GPT-4o-mini, Gemini 1.5 Flash, and Claude 3.5 Sonnet.

  • ✦ The majority of models nowadays have improved reasoning capabilities and require less elaborate prompts to achieve desired outcomes. Hence, not incorporating these prompting techniques may not necessarily lead to incorrect outputs.
  • ✦ However, learning and incorporating the patterns of these prompting techniques will result in more robust prompts that a) have a lower chance of generating inaccurate outputs, and b) perform better, especially for complex tasks.



Technique 1: Chain of Thought (CoT) Prompting

  • ✦ The Chain-of-Thought (CoT) is a method where a language model lays out its thought process in a step-by-step manner as it tackles a problem.
  • ✦ This approach is particularly effective in tasks that involve arithmetic and complex reasoning.
  • ✦ By organizing its thoughts, the model frequently produces more precise results.
  • ✦ Unlike conventional prompting, which merely asks for an answer, this technique stands out by requiring the model to explain the steps it took to reach the solution.
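The pattern above can be sketched as a few-shot prompt builder. The cafe exemplar below is illustrative (not from any benchmark); in practice you would send the resulting string to the LLM of your choice.

```python
# A minimal sketch of building a few-shot Chain-of-Thought prompt.
# The worked exemplar shows the step-by-step reasoning pattern we want
# the model to imitate before it answers the new question.

COT_EXEMPLAR = (
    "Q: A cafe sold 23 muffins in the morning and 17 in the afternoon. "
    "Each muffin costs $3. How much revenue did the cafe make?\n"
    "A: First, find the total muffins sold: 23 + 17 = 40. "
    "Then multiply by the price: 40 * 3 = 120. The answer is $120."
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked, step-by-step exemplar to the new question."""
    return f"{COT_EXEMPLAR}\n\nQ: {question}\nA:"

print(build_cot_prompt("A box holds 12 pens. How many pens are in 5 boxes?"))
```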




Technique 2: Zero-Shot Chain of Thoughts

  • ✦ Zero-Shot Chain of Thought (Zero-shot-CoT) prompting is a follow-up to CoT prompting that introduces a remarkably simple zero-shot prompt.
  • ✦ Studies have found that by appending the words "Let's think step by step." to the end of a question, LLMs are able to generate a chain of thought that answers the question.
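Because the technique is just an appended trigger phrase, the whole of Zero-shot-CoT can be sketched in a few lines. No worked exemplars are needed; the resulting string is what you would send to the model.

```python
# A minimal sketch of Zero-shot CoT: append the trigger phrase from the
# study ("Let's think step by step.") to any question. The model then
# generates its own chain of thought before the final answer.

ZERO_SHOT_COT_TRIGGER = "Let's think step by step."

def build_zero_shot_cot_prompt(question: str) -> str:
    """Append the Zero-shot-CoT trigger to elicit step-by-step reasoning."""
    return f"Q: {question}\nA: {ZERO_SHOT_COT_TRIGGER}"

print(build_zero_shot_cot_prompt("If I have 3 apples and buy 2 more, how many do I have?"))
```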




Technique 3: Contrastive Chain-of-Thought

  • ✦ Contrastive Chain-of-Thought is a strategy that places an incorrect explanation alongside the correct reasoning in the exemplars of a CoT prompt.
    • This approach has shown significant advancements over the traditional CoT, particularly in areas like arithmetic reasoning and answering factual questions.
    • The utilization of this method enables the AI model to comprehend not just the accurate steps of reasoning, but also the mistakes to steer clear of, thereby boosting its overall capacity for reasoning.
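The points above can be sketched as an exemplar builder that pairs a correct explanation with a deliberately wrong one, each clearly labelled. The pens exemplar is illustrative, chosen only to show the structure.

```python
# A minimal sketch of a contrastive CoT prompt: the exemplar shows both
# a valid chain of reasoning and an invalid one (labelled as wrong), so
# the model sees which reasoning patterns to follow and which to avoid.

CORRECT_EXPLANATION = (
    "Correct explanation: James has 3 packs with 5 pens each, "
    "so 3 * 5 = 15 pens in total."
)
WRONG_EXPLANATION = (
    # Invalid reasoning included on purpose (adds instead of multiplies).
    "Wrong explanation: James has 3 packs and 5 pens, "
    "so 3 + 5 = 8 pens in total."
)

def build_contrastive_cot_prompt(question: str) -> str:
    """Build a prompt whose exemplar contrasts correct and incorrect reasoning."""
    exemplar = (
        "Q: James buys 3 packs of pens with 5 pens per pack. "
        "How many pens does he have?\n"
        f"{CORRECT_EXPLANATION}\n"
        f"{WRONG_EXPLANATION}\n"
        "The answer is 15."
    )
    return f"{exemplar}\n\nQ: {question}\nA:"
```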




Technique 4: Least-to-Most Prompting

  • ✦ Least-to-Most prompting (LtM) takes CoT prompting a step further by first breaking a problem into subproblems and then solving each one. It is a technique inspired by real-world educational strategies for children.
  • ✦ As in CoT prompting, the problem to be solved is decomposed into a set of subproblems that build upon each other. In a second step, these subproblems are solved one by one. Unlike chain of thought, the solutions to previous subproblems are fed into the prompt used to solve the next subproblem.
  • ✦ This approach has been shown to generalize to more difficult problems than those seen in the prompts. For instance, when a GPT-3-class model is used with LtM, it can solve complex tasks with high accuracy using just a few exemplars, compared to lower accuracy with CoT prompting.
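The two-stage flow described above (decompose, then solve subproblems in order while feeding earlier answers back in) can be sketched as a small loop. `ask_llm` is a placeholder for whatever model call you use, and the prompt wording is an assumption, chosen only to make the control flow visible.

```python
# A minimal sketch of Least-to-Most prompting. `ask_llm` is any callable
# that takes a prompt string and returns the model's text response.

from typing import Callable, List

def least_to_most(question: str, ask_llm: Callable[[str], str]) -> str:
    # Stage 1: ask the model to break the problem into ordered subproblems.
    decomposition = ask_llm(
        "Break the following problem into a numbered list of simpler "
        f"subproblems, one per line:\n{question}"
    )
    subproblems: List[str] = [
        line.strip() for line in decomposition.splitlines() if line.strip()
    ]

    # Stage 2: solve each subproblem in turn, appending earlier Q/A pairs
    # to the context so later subproblems can build on previous solutions.
    context = f"Problem: {question}\n"
    answer = ""
    for sub in subproblems:
        answer = ask_llm(f"{context}\nSubproblem: {sub}\nAnswer:")
        context += f"\nSubproblem: {sub}\nAnswer: {answer}"
    return answer  # the final subproblem's answer resolves the original problem
```

Note the contrast with plain CoT: each call's prompt grows to include the solved subproblems so far, which is what lets LtM generalize to harder problems.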

Try out the practical examples in Weekly Tasks - Week 03