Title: Multi-action within a Prompt

  • LLMs Do Not Have Memory
  • Prompting Techniques for Better Reasoning
  • Multi-action within a Prompt
  • Prompt Chaining
  • Exception Handling
  • Hands-on Walkthrough and Tasks

Technique 1: Chaining Actions within the Prompt

  • ✦ Chaining actions involves giving the LLM a single prompt that contains a sequence of tasks to be completed one after the other (see the sketch after this list).
    • Each action in the chain builds upon the previous one, allowing for the creation of a multi-step process that the model follows to generate a final output.
    • This technique can enhance the utility and flexibility of LLMs in processing and generating information.
    • How Does it Work?
      • Sequential Instructions: The prompt is structured to include a list of actions that the LLM needs to perform. These actions are ordered logically, ensuring that the output of one action serves as the input or foundation for the next.
      • Clear Delimitation: Each action is clearly delineated within the prompt, often numbered or separated by clear markers. This helps the model understand the sequence of steps it needs to follow.
      • Building Complexity: The initial tasks are usually simpler, with complexity building as the model progresses through the chain. This gradual increase in complexity helps the model maintain focus and apply the information it has processed in earlier steps.
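
Below is a minimal sketch of action chaining within a single prompt. It assumes the OpenAI Python SDK (v1 interface); the model name, the review text, and the four actions are illustrative assumptions, and any chat-completion client could be substituted.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

customer_review = (
    "The battery life is great, but the screen cracked within a week "
    "and support has not replied to my email."
)

# One prompt containing several clearly delimited actions, performed in order;
# each step builds on the output of the previous one.
prompt = f"""
Perform the following actions on the customer review inside <review> tags:
1. Summarise the review in one sentence.
2. Identify the overall sentiment (positive, negative, or mixed) of the summary from step 1.
3. List any product issues mentioned in the review.
4. Using steps 1-3, draft a short, polite reply to the customer.

<review>{customer_review}</review>
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```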



Technique 2: More Structured Step-by-Step Instructions (a.k.a. Inner Monologue)

  • ✦ One key benefit of this prompting tactic is that we can extract the relevant part to display to the end-user, while keeping the other parts as the "intermediate outputs" (see the sketch after this list).
    • Similar to Chain-of-Thought prompting, LLMs can perform better at reasoning and logic problems if you ask them to break the problem down into smaller steps.
    • The "intermediate output" is also known as the inner monologue of the LLM when it is reasoning through the problems.
    • These "intermediate outputs" can be used to verify if the reasoning applied by the LLM is correct or as intended.
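
The sketch below again assumes the OpenAI Python SDK; the delimiter, model name, and maths problem are arbitrary choices. The prompt asks the model to mark each step with a delimiter so that the application can keep the earlier steps as intermediate outputs and show only the final step to the end-user.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

DELIM = "####"  # arbitrary marker used to separate the model's steps

prompt = f"""
A student claims that the equation 2x + 6 = 20 gives x = 8.
Work through the following steps, and start every step with the marker {DELIM}:

Step 1: Solve the equation yourself, showing your working.
Step 2: Compare your solution with the student's answer.
Step 3: Tell the student only whether their answer is correct, without revealing your solution.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
full_output = response.choices[0].message.content

# Steps 1-2 form the "inner monologue" (intermediate outputs) that we can log
# and inspect; only the final step is displayed to the end-user.
parts = [p.strip() for p in full_output.split(DELIM) if p.strip()]
intermediate_steps, user_facing_answer = parts[:-1], parts[-1]
print(user_facing_answer)
```

Logging intermediate_steps makes it possible to check afterwards whether the model actually solved the equation before judging the student's answer.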



Technique 3: Generated Knowledge

The idea behind the generated knowledge approach is to ask the LLM to generate potentially useful information about a given question/prompt before generating a final response.
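
A minimal sketch of this approach, kept within a single prompt to stay consistent with this section: the model is asked to list relevant facts first and then to answer using only those facts. The OpenAI Python SDK, the model name, and the example question are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Is it safe to charge a lithium-ion battery overnight?"

# A single prompt with two actions: generate background knowledge first,
# then answer the question using only that knowledge.
prompt = f"""
First, list three factual points that are relevant to the question below.
Then, using only those facts, answer the question in two to three sentences.

Question: {question}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```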




Potential Limitations & Risks of Multi-action within a Single Prompt

  • Complexity Management:

    • As the chain of actions grows, the prompt can become complex. It's essential to structure the prompt clearly to avoid confusing the model.
    • The effectiveness of a complex prompt relies heavily on prompt-engineering skill, and even careful engineering may not guarantee consistent, desired output from the LLM.
  • Error Propagation:

    • Mistakes in early steps can propagate through the chain, affecting the final output. Careful prompt design and error checking are crucial.
    • Within a single prompt, there is no way to explicitly check the intermediate outputs and use explicit logic (such as an if-else statement) to change the flow.
  • Context Dilution:

    • As the instructions grow and become more complex, the LLM's attention on some of them may become diluted, and those instructions may not be followed through.

However, for simpler instructions like those in the examples above, chaining multiple actions within a prompt still works relatively well while offering better speed for the application, since making one request to the LLM is generally faster than making multiple sequential requests. It also helps maintain a logical flow of information, ensuring that the output is coherent and contextually relevant across all steps.

Try out the practical examples in Weekly Tasks - Week 03