Wondering how to get the most out of Large Language Models (LLMs)? The secret lies in the art of effective prompting. But what exactly is prompting, and how can we refine it further? Let’s look at what prompting involves and introduce a method that’s gaining traction: Step-Back Prompting.

Prompting explained: your key to LLM success

When engaging with Large Language Models (LLMs), it’s crucial to provide clear instructions or prompts. These prompts guide LLMs in executing specific tasks. Mastering the craft of prompting can significantly enhance the performance of LLMs, leading to top-tier results.

Recent discussions have brought forward various strategies for effective prompting. In this article, we’re focusing on an innovative approach that’s making waves in the AI industry: Step-Back Prompting.

A recent paper, “Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models”, explores this approach. It suggests that when tackling complex tasks, LLMs can benefit from taking a step back and examining the issue from a more abstract perspective. This technique guides the model toward more effective reasoning and more accurate answers.

Unlocking the mechanics of Step-Back Prompting

Step-Back Prompting draws inspiration from human cognitive processes: when dealing with intricate issues, we often step back to view the problem from a wider angle. By abstracting the original question to a higher level and asking the LLM to answer that step-back question first, we enrich the LLM’s understanding of the original problem, leading to a more accurate answer.
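To make the flow concrete, here is a minimal sketch of the two-step pattern in Python, assuming the OpenAI client library; the model name, prompt wording, and helper functions are illustrative assumptions rather than the paper’s exact setup.

```python
# Minimal sketch of Step-Back Prompting, assuming the OpenAI Python client.
# Model name, prompt wording, and helper names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()          # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4o-mini"      # placeholder model name

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def step_back_answer(question: str) -> str:
    # 1. Abstract the original question into a broader, step-back question.
    step_back_q = ask(
        "Rewrite the following question as a more general, higher-level "
        f"question about the underlying concept or background:\n{question}"
    )
    # 2. Answer the step-back question to gather background knowledge.
    background = ask(step_back_q)
    # 3. Answer the original question, grounded in that background.
    return ask(
        f"Background information:\n{background}\n\n"
        f"Using the background above, answer the original question:\n{question}"
    )
```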

An example discussed in the above-mentioned paper is a historical question: “Which school did Estella Leopold attend from August to November 1954?” Questions like this are often hard for an LLM to answer accurately. As the paper’s screenshot shows, the LLM gave wrong answers both when answering directly and after chain-of-thought reasoning. However, by stepping back and asking, “What is the educational history of Estella Leopold?”, the LLM can first compile the pertinent facts about her education and then provide an accurate answer.
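Plugged into the sketch above (reusing the hypothetical ask helper), the prompt pair for this example might look roughly like this; the wording is illustrative, not the paper’s verbatim prompt:

```python
# Illustrative prompt pair for the Estella Leopold example (wording is an assumption).
original_q = "Which school did Estella Leopold attend from August to November 1954?"
step_back_q = "What is the educational history of Estella Leopold?"

background = ask(step_back_q)   # gather her schools and dates first
final_answer = ask(
    f"Background information:\n{background}\n\n"
    f"Using the background above, answer: {original_q}"
)
```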

The intersection of Step-Back Prompting and Retrieval Augmented Generation (RAG)

Interestingly, this example bears a resemblance to the mechanism of Retrieval Augmented Generation (RAG). Developed by Meta AI researchers, RAG is a two-part framework comprising an information retrieval component and an LLM that generates answers to user queries. RAG addresses two inherent challenges of LLMs: the potential for inaccurate answers and the provision of outdated information. By feeding LLMs credible sources, RAG enhances the accuracy of responses, while allowing LLMs access to up-to-date information.

Here is how RAG works. RAG takes a user query and retrieves a set of relevant documents from a given source (e.g. company data stored in a vector database). The retrieved documents, treated as context, are sent to the LLM together with the original prompt. The LLM then generates an answer to the user’s query based on the information extracted from those documents.
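As a rough illustration, here is a deliberately simplified RAG loop in Python. It reuses the hypothetical ask helper from the earlier sketch and substitutes naive keyword overlap for the embedding search a real vector database would perform:

```python
# Deliberately simplified RAG loop: naive keyword retrieval over an in-memory
# document list, followed by a context-augmented prompt. Real systems use
# embeddings and a vector database; names here are illustrative assumptions.
def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Score documents by word overlap with the query and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def rag_answer(query: str, documents: list[str]) -> str:
    # Send the retrieved passages as context alongside the original question.
    context = "\n".join(retrieve(query, documents))
    return ask(
        f"Context:\n{context}\n\n"
        f"Answer the question using only the context above:\n{query}"
    )
```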

The distinction between Step-Back Prompting, RAG, and Chain-of-Thought (CoT) Prompting

Unlike RAG, Step-Back Prompting needs no external database – the LLM’s own knowledge suffices, as the Estella Leopold example demonstrates.

Step-Back Prompting also diverges from Chain-of-Thought (CoT) prompting, proposed in 2022, which dissects complex problems into smaller reasoning steps (see the screenshot below for an example). For some problems, however, such as the Estella Leopold question, there is no obvious way to break them down. If a problem can’t be divided into smaller, more manageable steps, CoT struggles to reach an accurate answer. Step-Back Prompting instead poses a related question at a higher level of abstraction, supplying vital contextual information for the original problem and enabling the LLM to deliver the correct result, as the comparison sketched below illustrates.
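To see the difference in prompt shape, compare a CoT-style prompt with the step-back question for the same example; the wording here is illustrative:

```python
# Contrast of prompt styles for the same question (wording is an assumption).
question = "Which school did Estella Leopold attend from August to November 1954?"

# Chain-of-Thought: a single prompt asking the model to reason step by step.
# There is little to decompose here, so the reasoning chain has nothing to build on.
cot_prompt = f"{question}\nLet's think step by step."

# Step-Back: a broader question is answered first, and its answer is supplied
# as context when the original question is asked (see step_back_answer above).
step_back_question = "What is the educational history of Estella Leopold?"
```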

The takeaway: the power of high-quality contextual information

Ultimately, to harness the potential of LLMs for high-quality answers, we must provide high-quality contextual information related to the original question. The Step-Back Prompting method offers a promising way to achieve this, revolutionizing the way we interact with LLMs and paving the way for more accurate, efficient AI problem-solving.