How to create a prompt for your AI agents

AI agents do not automatically know how they should behave, what tasks they should perform, or how they should respond. To operate correctly, they need clear instructions. A prompt is the set of instructions that guides how your agent works. Well-designed prompts help ensure that the agent produces consistent, reliable, and useful results aligned with your workflow or application.

Prompt elements

In most cases, a prompt should clearly define the following elements (they may vary depending on the goal of the agent):

  • The role of the agent
  • The task it should perform
  • The steps the agent should follow
  • The rules the agent must follow and what the agent must NOT do
  • The knowledge or guidelines the agent should use
  • The type of input it will receive
  • The structure of the output it should produce

Before writing your prompt, make sure you can answer the following questions:

  • What is this agent?
    Define its main responsibility.
  • What should it do?
    Describe the actions it should perform and the steps required to complete the task.
  • What rules must it follow?
    Define boundaries, safety requirements, and limitations.
    Clearly state what the agent must do and what it must NOT do.
  • What knowledge should it use?
    Specify the guidelines, documentation, policies, or reference materials the agent should rely on.
  • What information will it receive?
    Describe the input the agent will analyze and any relevant details about the structure or possible limitations of that data.
  • What should it produce?
    Define the expected output format and the level of detail required in the response.

Below is a simplified example of a complete prompt. You can adapt the structure and instructions based on the type of agent you want to create.

Example: Clinical Risk Review Agent

Agent role
You are an AI assistant responsible for reviewing patient health reports and identifying potential clinical risk signals. Your role is to support clinical review by analyzing the information provided and highlighting relevant findings.

Agent task
Analyze the patient report and identify relevant clinical signals that may require attention from a healthcare professional.

Steps the agent should follow
Step 1: Extract key information from the report (symptoms, medical history, medications, and relevant clinical signals).
Step 2: Identify potential risk indicators based on the available information.
Step 3: Evaluate the findings using the clinical guidelines provided in the knowledge base.
Step 4: Generate a structured summary of the findings.

Rules the agent must follow
Follow the clinical guidelines provided in the knowledge base.
Clearly explain the reasoning behind detected risk signals.
Indicate when information is missing or unclear.

What the agent must NOT do
Do NOT provide medical diagnoses.
Do NOT invent or assume information that is not present in the input.
Do NOT provide recommendations that are not supported by the available information or guidelines.

Knowledge sources
Use the clinical guideline documents available in the knowledge base to support the analysis and reasoning.

Input description
The input may include patient-reported symptoms, medical history, medications, allergies, and additional notes written in free text. Some information may be incomplete or missing.

Output format
Provide the response using the following structure:

  • Risk level
  • Detected signals
  • Recommended action
  • Explanation summary
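The sections above can also be assembled programmatically into a single system prompt, which keeps the elements easy to edit and reuse. Below is a minimal sketch in Python; the `SECTIONS` dictionary and `build_prompt` helper are illustrative names, not part of any specific framework:

```python
# Assemble the prompt elements from the example into one system prompt.
# The section order mirrors the table above; all names are illustrative.
SECTIONS = {
    "Role": (
        "You are an AI assistant responsible for reviewing patient health "
        "reports and identifying potential clinical risk signals."
    ),
    "Task": (
        "Analyze the patient report and identify relevant clinical signals "
        "that may require attention from a healthcare professional."
    ),
    "Steps": (
        "1. Extract key information (symptoms, history, medications).\n"
        "2. Identify potential risk indicators.\n"
        "3. Evaluate findings against the clinical guidelines.\n"
        "4. Generate a structured summary."
    ),
    "Rules": (
        "Follow the clinical guidelines. Explain your reasoning. "
        "Do NOT provide medical diagnoses or invent information."
    ),
    "Output format": (
        "Risk level / Detected signals / Recommended action / "
        "Explanation summary"
    ),
}

def build_prompt(sections: dict[str, str]) -> str:
    """Join labeled sections into one system prompt string."""
    return "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())

prompt = build_prompt(SECTIONS)
```

Keeping each element as a separate entry makes it easy to adjust one instruction (for example, the rules) without touching the rest of the prompt.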

How to create an effective prompt from scratch

When creating one or multiple agents, the prompt defines the logic that guides how each agent processes information and produces outputs. Because agents often operate as part of a larger workflow, it is important to understand the overall scenario before defining the instructions.

Before writing your prompt, consider the problem you want to solve, the information the agent will receive, and how its output will be used. Use the following steps to structure your prompt and ensure each agent has the right instructions to perform its function.

Quick checklist (summary)

We’ll describe each step in more detail, but here is a quick checklist of what your prompt should answer:

  1. What is this agent? What is it responsible for?
  2. What should it do? What actions should it perform?
  3. What rules must it follow? Boundaries, safety, and limitations.
  4. What knowledge should it use? Guidelines, documentation, policies.
  5. What information will it receive? The input it will analyze.
  6. What should it produce? Output format and level of detail.
  7. Does it work? Test, review, and refine.

1. Define the agent’s role 

Start by clearly defining the role and responsibility of the agent.

This section should explain what the agent is, its purpose, and its position within the workflow. A well-defined role helps the agent understand the context in which it operates and prevents it from performing tasks outside its intended scope.

When writing this section, describe:

  • The purpose of the agent
  • The type of problem it helps solve
  • Who will use the results produced by the agent
  • Whether it is part of a larger multi-agent process

The role acts as the foundation for the rest of the prompt.

2. Define the agent’s task

Next, explain what the agent should actually do with the information it receives. The task should describe the actions the agent must perform, such as analyzing information, extracting relevant details, identifying patterns, or generating summaries.

When defining the task, clarify:

  • The type of analysis or processing the agent should perform
  • The steps it should follow when interpreting the input
  • The objective of the analysis or output

Clear task definitions help guide the agent’s reasoning process and reduce ambiguous responses.

3. Define rules and constraints

Rules and constraints guide how the agent should behave and help prevent incorrect or unsafe outputs. This section establishes the boundaries of the agent’s behavior.

Include instructions that define:

  • What the agent is allowed to do
  • What the agent should avoid doing
  • How the agent should handle uncertainty or missing information
  • Any policies, guidelines, or limitations the agent must follow

These rules help ensure the agent produces reliable and appropriate responses.

4. Add context or knowledge sources

If the agent should rely on specific knowledge or reference materials, include them in the prompt or connect them through the platform. Providing context helps the agent align its reasoning with trusted sources and improves consistency.

Context may include:

  • Clinical or operational guidelines
  • Internal company policies
  • Documentation or manuals
  • Research evidence or reference materials

Providing clear context improves the accuracy and consistency of the agent’s responses.
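One common way to provide context is to append relevant excerpts from the knowledge sources directly to the prompt. A small sketch, assuming the excerpts are available as plain strings; the guideline texts and the `with_context` helper are hypothetical examples:

```python
# Hypothetical guideline excerpts; in practice these would come from your
# knowledge base or document store.
GUIDELINE_EXCERPTS = [
    "Guideline A: systolic pressure above 140 mmHg warrants review.",
    "Guideline B: flag interactions between anticoagulants and NSAIDs.",
]

def with_context(base_prompt: str, excerpts: list[str]) -> str:
    """Append reference material so the agent can ground its reasoning."""
    context = "\n".join(f"- {e}" for e in excerpts)
    return f"{base_prompt}\n\nReference guidelines:\n{context}"

grounded = with_context("You are a clinical risk review agent.", GUIDELINE_EXCERPTS)
```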

5. Define the input the agent will receive

Describe the type of information the agent will analyze. This helps the agent understand what data it should expect and how to interpret it. Inputs may be structured data, free text, documents, or combinations of different sources.

When defining the input, describe:

  • The type of information provided to the agent
  • The possible structure of the data
  • The key elements the agent should pay attention to
  • Whether some fields may be missing or incomplete

Providing a clear description of the expected input helps the agent interpret the information correctly.

6. Define the output format

Clearly defining the output format helps ensure the agent produces consistent and structured responses. Without formatting instructions, the agent may generate responses that vary significantly between executions.

When defining the output, specify:

  • The structure of the response
  • The sections or fields that must be included
  • The level of detail expected in each section
  • Whether the output should follow a structured format, such as lists, sections, or JSON

Structured outputs are easier to review, automate, and integrate into workflows.
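When the output should follow a structured format such as JSON, it also helps to validate the agent's response before passing it to the next step of the workflow. A minimal sketch using only the standard library; the snake_case field names are an assumption about how the example output structure might be encoded as JSON:

```python
import json

# Required fields, mirroring the example output structure in this guide.
# The snake_case names are an illustrative assumption.
REQUIRED_FIELDS = {
    "risk_level",
    "detected_signals",
    "recommended_action",
    "explanation_summary",
}

def validate_output(raw: str) -> dict:
    """Parse the agent's JSON response and check that required fields exist."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"Missing fields: {sorted(missing)}")
    return data

# Example of a well-formed agent response.
response = json.dumps({
    "risk_level": "moderate",
    "detected_signals": ["elevated blood pressure"],
    "recommended_action": "Refer for clinical review",
    "explanation_summary": "Readings exceed the guideline threshold.",
})
result = validate_output(response)
```

Validating the structure at this boundary surfaces formatting drift early, before inconsistent outputs reach downstream automation.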

7. Test the prompt

After writing the prompt, test the agent with different types of inputs to evaluate how it behaves. Testing helps identify unclear instructions or areas where the prompt needs improvement.

When testing your prompt, try:

  • Typical cases that represent normal usage
  • Edge cases that challenge the agent’s logic
  • Inputs with incomplete information
  • Unexpected or ambiguous inputs

Review the outputs carefully and refine the prompt as needed. Prompt design is usually an iterative process, and improvements often come from testing and adjusting the instructions over time.
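The test cases above can be organized into a small harness that runs each input through the agent and collects the outputs for review. A sketch under the assumption that you have some way to call your agent; here `run_agent` is only a placeholder that returns canned responses:

```python
# Placeholder standing in for a real agent call; in practice this would send
# the prompt and input to your agent and return its response.
def run_agent(prompt: str, user_input: str) -> str:
    if not user_input.strip():
        return "Insufficient data: input is empty."
    return f"Risk level: low\nDetected signals: none\nInput length: {len(user_input)}"

# One representative input per category described above.
TEST_CASES = {
    "typical": "Patient reports mild headache, no current medications.",
    "edge": "Patient lists 14 medications with conflicting dosages.",
    "missing_info": "",
}

def evaluate(prompt: str) -> dict[str, str]:
    """Run every test case and collect the raw outputs for manual review."""
    return {name: run_agent(prompt, text) for name, text in TEST_CASES.items()}

results = evaluate("You are a clinical risk review agent...")
```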

How to fine-tune an existing prompt

After creating the first version of a prompt, you may need to refine it to improve clarity, accuracy, and consistency. Prompt design is usually an iterative process, where small adjustments can significantly improve the agent’s performance.

Fine-tuning a prompt involves reviewing how the agent behaves, identifying weak outputs, and adjusting the instructions to guide the agent more clearly.

Quick checklist (summary)

When improving an existing prompt, review the following questions:

  1. Are the instructions clear enough?
  2. Is the agent producing consistent outputs?
  3. Does the agent follow the expected reasoning process?
  4. Is the output structured in the right way?
  5. Does the agent behave correctly when information is missing or unclear?

Use the steps below to refine and improve your prompt.

1. Identify weak or inconsistent outputs

Start by reviewing the outputs produced by the agent. Look for situations where the agent produces unclear, incomplete, or inconsistent responses. These cases often indicate that the prompt instructions need to be more precise.

Common signs that a prompt needs improvement include:

  • Vague or overly generic responses
  • Missing important information in the output
  • Incorrect prioritization of signals or findings
  • Inconsistent responses for similar inputs

Understanding where the prompt fails helps guide the improvements you should make.

2. Clarify or expand the instructions

Many prompt issues occur because the instructions are too general or ambiguous. If the agent’s behavior is inconsistent, review the role, task description, and rules to ensure they clearly explain what the agent should do.

You may need to:

  • Clarify the objective of the agent’s task
  • Add more detail about how the agent should interpret the input
  • Specify how the agent should prioritize different signals or information

Clear and explicit instructions reduce ambiguity and improve reliability.

3. Improve the agent’s reasoning structure

If the agent produces incomplete or inconsistent analysis, consider adding more structure to the reasoning process. You can guide the agent by describing the steps it should follow when processing the input. This helps the agent analyze information more systematically.

For example, you may instruct the agent to:

  • Extract key information from the input
  • Identify relevant signals or patterns
  • Evaluate them using the provided guidelines
  • Produce the final output based on that analysis

Providing a clearer reasoning structure often improves the quality and consistency of the results.

4. Adjust rules and constraints

If the agent produces responses that go beyond its intended scope, review the rules and constraints defined in the prompt. You may need to strengthen instructions about:

  • What the agent is allowed to do
  • What conclusions it should avoid making
  • How it should handle missing or uncertain information
  • What policies or guidelines it must follow

Well-defined constraints help prevent incorrect or unsafe outputs.

5. Refine the output format

If the agent’s responses vary too much in structure or detail, refine the output format defined in the prompt.

Make sure the prompt clearly specifies:

  • The sections or fields that must appear in the output
  • The level of detail expected in each section
  • The labels, categories, or formats the agent should use

A clearly defined output format helps ensure the results remain consistent and easier to review.

6. Test and compare prompt versions

After adjusting the prompt, test the agent again using different inputs. Comparing outputs from different prompt versions can help you determine whether the changes improved the agent’s behavior.

When testing improvements, try:

  • Typical inputs from real scenarios
  • Edge cases that challenge the agent’s logic
  • Inputs with missing or ambiguous information

Review the results carefully and continue refining the prompt if necessary. Prompt improvement is usually an iterative process, and multiple adjustments may be needed before the agent consistently performs as expected.
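Comparing versions can be as simple as running the same inputs through each prompt and reviewing the outputs side by side. A sketch, again with a placeholder `run_agent` whose behavior merely simulates how output might change between prompt versions:

```python
# Placeholder agent call; the returned text varies with the prompt only to
# simulate a behavioral difference between prompt versions.
def run_agent(prompt: str, user_input: str) -> str:
    detail = "structured" if "structure" in prompt else "free-form"
    return f"{detail} analysis of: {user_input}"

PROMPT_V1 = "Analyze the input and provide a result."
PROMPT_V2 = "Analyze the input and produce a result using the required structure."

INPUTS = ["typical patient report", "ambiguous patient report"]

def compare(v1: str, v2: str, inputs: list[str]) -> list[tuple[str, str, str]]:
    """Return (input, v1 output, v2 output) triples for side-by-side review."""
    return [(i, run_agent(v1, i), run_agent(v2, i)) for i in inputs]

rows = compare(PROMPT_V1, PROMPT_V2, INPUTS)
```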

Common prompt mistakes

When writing prompts for AI agents, some common issues can lead to unclear instructions or inconsistent results. Recognizing these mistakes can help you quickly identify why an agent may not behave as expected.

Below are some frequent prompt design problems and how they can affect the agent’s behavior.

1. Being too vague

One of the most common issues is providing instructions that are too general. When the prompt lacks clear guidance, the agent may interpret the task in different ways, which can lead to inconsistent or incomplete outputs.

Providing more specific instructions helps the agent understand exactly what it should focus on.

Less effective prompt:
Analyze the patient data and provide insights.

Improved prompt:
Review the patient information and extract the reported symptoms, current medications, and relevant medical history. Based on this information, identify potential risk signals and summarize them in a structured report.

2. Not defining the output format

If the prompt does not specify how the response should be structured, the agent may generate outputs that vary in format, length, or level of detail. This makes results harder to review and may create inconsistencies across different executions.

Defining the output structure helps ensure consistent and easy-to-review results.

Less effective prompt:
Analyze the document and explain the findings.

Improved prompt:
Analyze the document and provide the results using the following structure:

  • Key findings
  • Detected risk signals
  • Recommended action
  • Summary explanation

3. Mixing too many tasks in one prompt

If a prompt asks the agent to perform too many unrelated tasks at once, the instructions may become unclear. This can lead to incomplete analysis or outputs that mix different objectives.

Agents usually perform better when they focus on a single, well-defined responsibility. In more complex workflows, it is often more effective to divide the process into multiple agents or LLM steps, where each one performs a specific function. This approach helps keep instructions clear and allows each part of the workflow to produce more reliable results.

Less effective prompt:
Review the patient report, identify risks, recommend treatment options, summarize the case, and write a message for the patient.

Improved prompt:
Review the patient report and identify potential clinical risk signals. Summarize the findings in a structured report for clinical review.

In this scenario, the workflow could then continue with additional agents. For example, one agent may analyze the patient information, another may generate recommendations, and another may prepare a message for the patient. Dividing tasks across multiple agents helps keep each prompt focused and improves the overall reliability of the workflow.
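A workflow divided this way can be sketched as a simple pipeline where each agent's output feeds the next. The three step functions below are illustrative placeholders for focused agents, not real agent calls:

```python
# Each function stands in for one focused agent with its own small prompt.
def analyze_report(report: str) -> str:
    return f"signals extracted from: {report}"

def generate_recommendations(signals: str) -> str:
    return f"recommendations based on: {signals}"

def draft_patient_message(recommendations: str) -> str:
    return f"Dear patient, {recommendations}"

def pipeline(report: str) -> str:
    """Chain the focused agents; each prompt stays small and specific."""
    signals = analyze_report(report)
    recommendations = generate_recommendations(signals)
    return draft_patient_message(recommendations)

message = pipeline("annual checkup report")
```

Because each stage has a single responsibility, a weak output can be traced to one agent and fixed by adjusting only that agent's prompt.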

4. Missing rules or constraints

Without clear rules or boundaries, the agent may make assumptions or generate conclusions that go beyond its intended role. This can lead to unreliable or unsafe outputs.

Adding constraints helps guide the agent’s behavior and prevents unsupported conclusions.

Less effective prompt:
Review the patient symptoms and determine what condition they might have.

Improved prompt:
Review the patient symptoms and identify potential risk signals. Do not provide medical diagnoses. If the information is insufficient, indicate that additional information is required.

5. Ignoring edge cases

Prompts often work well for typical inputs but fail when information is incomplete or ambiguous. If the prompt does not explain how the agent should behave in these situations, the results may become unreliable.

Preparing the prompt for edge cases helps the agent handle real-world situations more effectively.

Less effective prompt:
Analyze the input and provide a result.

Improved prompt:
Analyze the input and produce a structured result. If important information is missing or unclear, explicitly indicate that the available data is insufficient to complete the analysis.