! AI - Prompting Tips




 
Theo's Prompting Tips:
Design the Perfect Prompt for Use with OpenAI Models

 


Generally, prompting is easy. It only gets hard when a simple prompt does not give you what you want.

 

## Introduction

Prompting is a procedure in which instructions are provided to a language model with the objective of eliciting a specific type of output. Imagine engaging in a conversation with an intelligent robot that takes your words very literally. It is essential to communicate your intentions in a precise and unambiguous manner.

 

The art of creating a prompt that is well-constructed is immensely important. A thoughtfully designed prompt can be the dividing line between receiving a response that is logical, meaningful, and closely aligned with your expectations, and getting a reply that veers off-course or is downright illogical.

 

This guide is dedicated to acquainting you with the finer aspects of designing prompts. It aims to break down the steps involved and demystify any jargon that you might encounter in the process.

 

A common strategy that is employed when crafting prompts is the SPR, which stands for Single Prompt Response. The premise of employing the SPR method is to maximize efficiency by attempting to achieve the desired result using just one prompt. This approach is favored because it allows for the optimal use of Tokens, which are discrete units of data that the model uses to process input and produce output. By minimizing the number of prompts, more Tokens can be allocated for both the input and the output, which can enhance the quality and relevance of the responses.

 

As an illustration, let’s take a look at a sample prompt that was created for GPT-4, a highly advanced language model. It is important to note that the complexity of the prompts suitable for GPT-4 might be too intricate for earlier versions or other language models, as GPT-4 has been designed with a more sophisticated understanding and processing capacity.

 

 

Let's start with the more or less official "Prompting Guide".

 

## Best Practices for Prompt Design

How do you design the perfect Prompt?

 

Perfect Prompt = [Context] 

               + [Intention] 

               + [specific Information] 

               + [Sample of Response Format] 
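In script terms, this formula can be sketched as a small helper that assembles the four building blocks into one prompt string. This is a minimal Python sketch; the function name and the sample field texts are invented for illustration, not part of any API:

```python
def build_prompt(context, intention, specifics, format_sample):
    """Assemble the four building blocks of a 'perfect prompt'."""
    return (
        f"{context}\n\n"
        f"Task: {intention}\n\n"
        f"Details: {specifics}\n\n"
        f"Respond exactly in this format:\n{format_sample}"
    )

prompt = build_prompt(
    context="You are a support agent for a scripting tool.",
    intention="Explain how to rename a file.",
    specifics="The user works on Windows and is new to scripting.",
    format_sample="Step 1: ...\nStep 2: ...",
)
```

Keeping the pieces separate like this makes it easy to vary one block, for example the response format, without rewriting the whole prompt.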

 

# Maximizing the Efficacy of AI Language Models

 

In this chapter, we will delve into various strategies that can be employed to maximize the efficacy of AI language models. These strategies include refining the queries, adopting personas, using delimiters, specifying steps, providing examples, and defining the desired output length.

 

## Including Details in Queries

 

To extract more relevant answers from an AI language model, it is crucial to include details in the query. By providing specifics, such as the context or attributes you are interested in, the language model can generate responses that are more aligned with your requirements. For instance, instead of asking, "What is the capital of a European country?", a more detailed query would be, 
"What is the capital of France?".

 

## Adopting Personas

 

Asking the AI to adopt a persona is an effective way of shaping the responses according to a particular character or role. This can be especially useful in creative writing or role-playing scenarios. For example, you can instruct the model, “Pretend you are a medieval knight,” and the model will craft responses in the tone and language of a knight from the Middle Ages.

 

## Using Delimiters

 

Delimiters are symbols or characters used to separate distinct parts of the input. This is particularly useful when you want the AI to address multiple points or questions in a single input. For example, using a vertical bar ‘|’ between questions: "What is the capital of France? | Tell me a fun fact about the Eiffel Tower." This informs the model that these are two separate queries, and it should provide individual responses to each.
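On the receiving side, the same delimiter makes a combined input trivial to take apart again. A minimal Python sketch (variable names are illustrative):

```python
# Two questions combined into one input, separated by a vertical bar.
combined = ("What is the capital of France? | "
            "Tell me a fun fact about the Eiffel Tower.")

# Split on the delimiter to recover the individual queries.
queries = [q.strip() for q in combined.split("|")]
```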

 

## Specifying Steps

 

In scenarios where a task needs to be completed, it is important to specify the steps involved. This ensures that the language model understands the sequence and can provide a structured and comprehensive guide. For instance, if you’re asking for instructions on baking a cake, specify that you need steps from preheating the oven to cooling the cake.

 

## Providing Examples

 

Giving examples can be particularly beneficial in enhancing the clarity of your query. Examples can act as an illustration of what you expect from the AI. For instance, if you ask for a list of programming languages, you can include examples such as "like Python or Java". This will help the AI to understand the context and provide a more relevant list.

 

## Specifying Output Length

 

Sometimes, the information you are seeking may require either a brief summary or an in-depth explanation. By specifying the desired length of the output, you can tailor the AI's response to fit your needs. For instance, if you're seeking a concise definition of a term, you could specify “Please provide a brief explanation.” Conversely, if you need an extensive analysis, you could ask for “a detailed explanation.”

 

## Conclusion

 

When interacting with AI language models, it is essential to craft your queries thoughtfully. By including details, adopting personas, using delimiters, specifying steps, providing examples, and defining the output length, you can significantly enhance the relevance and usefulness of the AI responses. Through the mindful implementation of these strategies, you can unlock the full potential of AI language models in addressing your information needs.

 

 

Some more Tipps:

 

### 1. Use the Latest Model

It is highly advisable to select the most recent and capable models when working with language generation. Generally, newer models are built upon more advanced architectures, which often translate into enhanced performance and the ability to yield superior results.

 

As of November 2022, the model `text-davinci-003` is particularly endorsed for tasks involving text generation. For tasks that involve generating code, the `code-davinci-002` model is recommended. These models are part of OpenAI’s lineup and are designed for specialized applications; `text-davinci-003` for natural language processing and `code-davinci-002` for coding-related tasks.

 

Moreover, once GPT-4 becomes accessible, it is strongly encouraged to employ this model for tasks that are complex in nature. GPT-4 is expected to be a more advanced iteration compared to its predecessors, and is likely to have a deeper understanding and processing capabilities. Utilizing GPT-4 could potentially result in higher quality outputs, especially for intricate and demanding tasks.

 

### 2. Structure the Prompt Properly

When crafting a prompt for a language model, it is imperative to structure it in a way that precisely communicates your intentions. One effective method is to position the instructions at the very start of the prompt. Following the instructions, you can employ a distinct separator, such as ### or """, before presenting any contextual information. This strategy serves to create a clear demarcation between the instructions and the context, which can enhance the clarity and comprehensibility of the prompt for the language model.

 

**Example:**

 

Less Effective:

 

Summarize the text below as a bullet point list of the most important points.

{text input here}

 

Better:

 

Summarize the text below as a bullet point list of the most important points.

Text: """

{text input here}

"""

This approach can be particularly beneficial in ensuring that the language model recognizes and understands the task it is meant to perform. The separator acts as a signal to the model that the instructions have concluded and the context or content to be processed is commencing. It helps in mitigating any potential confusion or ambiguity for the model and can contribute to achieving more accurate and relevant outputs.
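The "better" pattern above can also be generated programmatically, so the instruction always comes first and the content is always fenced by the same separator. A Python sketch (the function name is invented for illustration):

```python
def summarize_prompt(text):
    """Instruction first, then the content behind a clear delimiter."""
    return (
        "Summarize the text below as a bullet point list "
        "of the most important points.\n"
        'Text: """\n'
        f"{text}\n"
        '"""'
    )

p = summarize_prompt("The quick brown fox jumps over the lazy dog.")
```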

 

 

### 3. Be Specific and Detailed

Provide specific and detailed instructions encompassing aspects such as the desired context, expected outcome, preferred length, format, style, and other relevant parameters. Being meticulous and explicit in your instructions is instrumental in steering the model's response towards alignment with your expectations. Visualize the AI navigating through a high-dimensional space, using the words and guidelines you furnish. The precision and specificity of your instructions are akin to coordinates that guide the AI to a particular position within this space. The more refined and unambiguous these coordinates are, the more adeptly the AI can pinpoint and assume the optimal position that corresponds to the output you seek.

 

In simpler terms, think of the AI as a navigator in a high-dimensional space where words and instructions are its map and compass. By providing specific instructions regarding the context, outcome, length, format, and style, you are essentially giving the AI a more detailed map and a more accurate compass. This allows the AI to navigate the space with precision and arrive exactly where you want it to be, resulting in an output that closely matches your expectations.

**Example:**

 

Less Effective:

 

Translate this text.

 

Better:

 

Translate the following text for use in a technical manual, but keep it easy to understand. 
Explain difficult words and write in the style of the famous poet Goethe.

 

 

### 4. State your Intent

Articulate explicitly the intended application of the output. By conveying how the result will be utilized, you equip the AI with the insights needed to tailor the output in the requisite format. For instance, you can instruct the AI with a directive such as "the text should be crafted for inclusion in a manual, and it is imperative that it spans between 1000 and 3000 characters in length."

 

This directive serves a dual purpose. Firstly, by stating that the text is for a manual, you are giving the AI context about the tone, style, and structure it should adopt. Manuals typically require clear, concise, and instructional language. Secondly, by specifying a character range, you are establishing clear boundaries within which the AI must operate. This ensures that the output is neither too brief to be informative nor too extensive to be unwieldy.

 

In essence, providing the AI with information about the intended use and specific criteria like length helps in refining the output. This kind of guidance acts like a mold, shaping the AI's response to fit the desired purpose and format, thus ensuring that the final product is both fit for purpose and meets the defined specifications.

 

**Example:**

 

Less Effective:

 

Rewrite the following Text: 

### Text ###

 

Better:

 

Rewrite the following text for use in a technical manual. Keep the result easy to understand and in English.
Explain difficult words and keep the result below 4000 characters.

 

### 5. Avoid Typing Mistakes and Put the Most Important Words at the Start of the Prompt

When working with commands for image generation, such as the AIC.Generate Image Commands, it is particularly crucial to place the most significant words at the beginning of the prompt. Envision the AI as an explorer maneuvering through a high-dimensional space, where the words you provide serve as its navigational tools. In the context of generating images from text prompts, the positioning of words takes on heightened importance, as it dictates the focal elements of the resulting image. The principal subject or motif should be front and center in the prompt, essentially serving as the guiding star for the AI's creative journey. Perfect grammar is not a strict necessity, especially in "Text to Image" prompts; the emphasis should instead be on positioning the central themes at the outset of the prompt.

 

For example, let's say you want to create an image of a serene lake surrounded by mountains with a sunset in the background.

 

**Example:**

 

Less Effective:

 

"A beautiful scene where the golden sun is reflected in the calm waters, which are surrounded by towering peaks, depicting a serene lake with mountains and a sunset in the background."

 

Better:

 

"Serene lake, mountains, sunset background. Calm waters reflecting the golden sun, surrounded by towering peaks."

 

In this example, although the description is grammatically well-structured, the main elements "serene lake," "mountains," and "sunset background" are placed at the end. The AI might not give them the prominence they deserve in the image because they are mentioned later in the prompt.

 

In essence, think of the words in the prompt as guiding the AI's focus in that high-dimensional creative space. Leading with the primary elements ensures that they are given precedence in the AI's rendering of the image.

 

 

### 6. Direct the AI to your Output-Format / Provide Format Examples

Clearly articulate the desired format of the output by providing illustrative examples. Language models often respond more accurately when presented with explicit demonstrations of the format requirements. Moreover, it facilitates the programmatic parsing of multiple outputs with greater ease and efficiency.

 

By incorporating examples into your instructions, you are essentially providing the AI with a template to follow. This is especially beneficial because language models like GPT are trained on a vast array of text data, and showing them a specific format through an example can significantly narrow down the possibilities and guide them toward producing the desired output.

 

For instance, if you are requesting the AI to generate data in tabular form, you could provide an example like this:

 

"Generate a table with columns 'Name', 'Age', and 'Occupation', similar to the example below:

 

Name       | Age | Occupation

-----------|-----|-----------

John Doe   | 28  | Engineer

Jane Smith | 32  | Doctor

"

 

By providing this example, the AI understands the exact structure and format that you are looking for and can produce outputs that can be easily parsed programmatically.
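To illustrate the "parsed programmatically" point, here is a minimal Python sketch that turns a table in exactly that format back into structured records. It assumes the header row comes first and a dashed separator line follows it:

```python
def parse_table(raw):
    """Parse a pipe-delimited table like the example above into dicts."""
    # Drop separator lines (made only of '-', '|', and spaces) and blanks.
    lines = [ln for ln in raw.strip().splitlines() if not set(ln) <= set("-| ")]
    header = [cell.strip() for cell in lines[0].split("|")]
    return [
        dict(zip(header, (cell.strip() for cell in ln.split("|"))))
        for ln in lines[1:]
    ]

raw = """Name       | Age | Occupation
-----------|-----|-----------
John Doe   | 28  | Engineer
Jane Smith | 32  | Doctor"""
rows = parse_table(raw)
```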

 

Additionally, examples serve as visual aids that can bridge the gap between abstract instructions and concrete execution. This is particularly useful in cases where the desired format may be complex or unconventional, and a textual description alone may not suffice in conveying the intricacies involved. Through examples, you essentially furnish the AI with a blueprint to emulate, streamlining the process and enhancing the precision of the output.

 

**Example:**

 

Less Effective:

 

Extract the entities mentioned in the text below. Extract the following 4 entity types: company names, people names, specific topics and themes.

Text: {text}

 

 

Better:

 

Extract the important entities mentioned in the text below. First extract all company names, then extract all people names, then extract specific topics which fit the content and finally extract general overarching themes.

Desired format:

Company names: <comma_separated_list_of_company_names>

People names: -||-

Specific topics: -||-

General themes: -||-

Text: {text}
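Because the desired format is fixed, the model's answer can then be parsed with a few lines of code. A minimal Python sketch; the sample output below is invented for illustration:

```python
def parse_entities(output):
    """Parse model output in the 'Label: a, b, c' format sketched above."""
    result = {}
    for line in output.strip().splitlines():
        if ":" in line:
            label, _, values = line.partition(":")
            result[label.strip()] = [v.strip() for v in values.split(",") if v.strip()]
    return result

sample = """Company names: OpenAI, Stripe
People names: John Doe
Specific topics: payment APIs
General themes: technology"""
entities = parse_entities(sample)
```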

 

### 7. Ask for Follow-up Questions

While designing the prompt, you can ask the model for follow-up questions. This helps you refine the final prompt until it is exactly to the point.

However, with the SPR this does not work directly, because the AI does not remember the earlier dialog unless you add it to the prompt.

Therefore, ask for the questions once, then add your answers to the original prompt. This improves the original prompt and removes the need for the questions, which would otherwise only waste tokens.

 

**Example:**

 

Less Effective:

 

Write the marketing text.

 

 

Better:

 

Write the marketing text, and if you need further information to improve it, ask me all the questions you need, numbering each one so I can answer in reference to that number.

 

Finally, answer these questions and add the answers to the original prompt, then send the prompt again. Repeat until the original prompt contains all the needed information.
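The "add the answers to the original prompt" step can be sketched as a tiny helper (Python; the function name is invented for illustration):

```python
def extend_prompt(original_prompt, answers):
    """Fold numbered answers back into the original prompt, so a single
    prompt (SPR style) now carries all the clarifications."""
    numbered = "\n".join(f"{i}. {a}" for i, a in enumerate(answers, 1))
    return f"{original_prompt}\n\nAdditional information:\n{numbered}"

final = extend_prompt(
    "Write the marketing text.",
    ["Target group: script developers", "Tone: enthusiastic but factual"],
)
```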

 

 

### 8. Start with Zero-Shot, Then Few-Shot, Then Fine-Tune

Begin with instructions only and no examples (zero-shot). If that does not work, provide a few examples (few-shot). If neither works, you might need to fine-tune the model.

 

**Example of Zero-shot:**

 

Extract keywords from the text below.

Text: {text}

Keywords:

 

 

**Example of Few-shot:**

 

Extract keywords from the corresponding texts below.

Text 1: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.

Keywords 1: Stripe, payment processing, APIs, web developers, websites, mobile applications.

Text 2: {text}

Keywords 2:
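Few-shot prompts like the one above are easy to assemble from a list of worked examples. A Python sketch (names are illustrative):

```python
def few_shot_prompt(instruction, examples, query):
    """Instruction first, then numbered worked examples, then the open query."""
    parts = [instruction]
    for i, (text, keywords) in enumerate(examples, 1):
        parts.append(f"Text {i}: {text}\nKeywords {i}: {keywords}")
    n = len(examples) + 1
    parts.append(f"Text {n}: {query}\nKeywords {n}:")
    return "\n\n".join(parts)

p = few_shot_prompt(
    "Extract keywords from the corresponding texts below.",
    [("Stripe provides APIs for payment processing.",
      "Stripe, APIs, payment processing")],
    "{text}",
)
```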

 

 

### 9. Let AI Fact-Check the Result

This is something that you can do easily with the SPR. Just take the result and prefix it with: "Please fact-check the following text and tell me all shortcomings or options for improvement." Then ask for the improved version.
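As a one-liner in script form (a Python sketch; the prefix wording is taken from this tip):

```python
FACT_CHECK_PREFIX = (
    "Please fact-check the following text and tell me "
    "all shortcomings or options for improvement:\n\n"
)

def fact_check_prompt(previous_result):
    """Wrap an earlier model result in a fact-checking prompt."""
    return FACT_CHECK_PREFIX + previous_result

check = fact_check_prompt("Water boils at 50 degrees Celsius.")
```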

 

 

 

### 10. Let Independent AI Personas Discuss and Rework the Results

It's often more effective to let multiple AI instances work on the same project.

 

**Example:**

 

 


 

Act as 3 experts in the field of scripting. Let them discuss the following code and make all the changes to the code that they suggest. When all 3 experts are satisfied with the code, show me the result.

 

 

### 11. Use Leading Words for Code Generation

For code generation, use “leading words” to nudge the model toward a particular pattern. For example, adding “import” hints to the model that it should start writing in Python.

 

**Example:**

 

Less Effective:

 

# Write a simple python function that

# 1. Ask me for a number in miles

# 2. It converts miles to kilometers

 

 

Better:

 

# Write a simple python function that

# 1. Ask me for a number in miles

# 2. It converts miles to kilometers

import
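Continuing from the leading word, a plausible completion might look like the following. This is an illustrative sketch of the kind of code a model tends to produce, not captured model output:

```python
def miles_to_km(miles):
    """Convert miles to kilometers (1 mile = 1.60934 km)."""
    return miles * 1.60934

def main():
    # 1. Ask for a number in miles
    miles = float(input("Enter a distance in miles: "))
    # 2. Convert miles to kilometers and print the result
    print(f"{miles} miles = {miles_to_km(miles):.2f} kilometers")
```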

 

### 12. Assign Roles

You can make your prompt creative by assigning a role to the model. For example, you could ask the model to answer questions as if it were an expert in ancient history.

 

### 13. Use Split Personalities

You can ask the model to simulate a conversation between two or more personalities or characters. For example, you could have a debate between a scientist and a philosopher within the model's responses.

 

### 14. Use Zero-Shot or Few-Shot Learning

Zero-Shot means giving the model a task with no examples, just the instructions. Few-Shot means providing a few examples to guide the model. Use Few-Shot when Zero-Shot doesn’t give good results.

 

 

## Buzzwords Explained

 

1. Zero-Shot: In a Zero-Shot setting, the language model is presented with a task without receiving any prior examples and is expected to accomplish it based solely on the information contained in the prompt. This is akin to giving a test on a topic that hasn't been specifically studied, relying on related knowledge to answer.

 

2. Few-Shot: Here, the model is provided with a handful of examples within the prompt to steer its responses. These examples serve as a guide, helping the model to understand the context and desired format of the response.

 

3. Fine-Tuning: This involves calibrating the parameters and utilizing specific training data on a pre-trained model to enhance its performance for a tailored task. Essentially, fine-tuning is like honing the skills of an already educated person by providing more specialized training.

 

4. Tokens: Tokens are the discrete units of text that language models process. In English, a token can range from being as brief as a single character to as extensive as a word (e.g., 'a' or 'apple'). Tokens are akin to the building blocks that the model uses to comprehend and generate text.

 

5. Temperature: This parameter modulates the randomness infused into the model's output. Higher temperature values, say closer to 1, result in more varied and creative outputs, whereas lower values, closer to 0, yield more deterministic and focused responses. You can think of temperature as controlling the 'spice level' of the output.
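Conceptually, temperature rescales the model's token probabilities before sampling. A self-contained Python sketch of the mechanism (a conceptual illustration, not OpenAI's actual implementation):

```python
import math

def apply_temperature(logits, temperature):
    """Softmax over logits / temperature: low values sharpen the
    distribution toward the top token, high values flatten it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = apply_temperature(logits, 0.1)   # almost all mass on the top token
hot = apply_temperature(logits, 2.0)    # much flatter distribution
```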

 

6. Max Tokens: This specifies the ceiling on the number of tokens that the model generates in its response. It's like setting a limit on the length of the answer, ensuring it doesn't become overly verbose.

 

7. Stop Sequences: These are characters or strings that serve as a cue for the model to cease generating text. It's akin to a period at the end of a sentence or a director yelling 'cut' on a film set.

 

8. Top-p (Nucleus Sampling): An additional parameter that controls the randomness of the output by truncating the set of tokens the model considers: only the most probable tokens, up to a cumulative probability of p, remain candidates, preventing the model from going off on unlikely tangents. (This is distinct from Top-k, which keeps a fixed number of candidates.)
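The mechanism can be sketched as follows: sort the candidate tokens by probability, keep the smallest set whose cumulative probability reaches p, and renormalize. A conceptual Python sketch, not OpenAI's implementation:

```python
def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches p (nucleus sampling), then renormalize the survivors."""
    indexed = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for i, prob in indexed:
        kept.append((i, prob))
        cum += prob
        if cum >= p:
            break
    total = sum(prob for _, prob in kept)
    return {i: prob / total for i, prob in kept}

# Tokens 0 and 1 together reach p = 0.7; tokens 2 and 3 are cut off.
nucleus = top_p_filter([0.5, 0.25, 0.15, 0.1], p=0.7)
```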

 

## Conclusion

Crafting an impeccable prompt is an art that amalgamates lucid instructions, well-orchestrated structure, and a nuanced comprehension of language model mechanics. By adhering to the best practices delineated in this guide and judiciously employing parameters such as Temperature and TopP, you can sculpt prompts that elicit robust and high-caliber responses from OpenAI models. This is tantamount to having a fluent conversation with an intelligent, resourceful, but literal-minded collaborator. You need to know how to ask to get what you want.