Why Customize Prompts?

Although CrewAI’s default prompts work well for many scenarios, low-level customization opens the door to significantly more flexible and powerful agent behavior. Here’s why you might want to take advantage of this deeper control:

  1. Optimize for specific LLMs – Different models (such as GPT-4, Claude, or Llama) thrive with prompt formats tailored to their unique architectures.
  2. Change the language – Build agents that operate exclusively in languages other than English, handling nuances with precision.
  3. Specialize for complex domains – Adapt prompts for highly specialized industries like healthcare, finance, or legal.
  4. Adjust tone and style – Make agents more formal, casual, creative, or analytical.
  5. Support highly customized use cases – Utilize advanced prompt structures and formatting to meet intricate, project-specific requirements.

This guide explores how to tap into CrewAI’s prompts at a lower level, giving you fine-grained control over how agents think and interact.

Understanding CrewAI’s Prompt System

Under the hood, CrewAI employs a modular prompt system that you can customize extensively:

  • Agent templates – Govern each agent’s approach to their assigned role.
  • Prompt slices – Control specialized behaviors such as tasks, tool usage, and output structure.
  • Error handling – Direct how agents respond to failures, exceptions, or timeouts.
  • Tool-specific prompts – Define detailed instructions for how tools are invoked or utilized.

Check out the original prompt templates in CrewAI’s repository to see how these elements are organized. From there, you can override or adapt them as needed to unlock advanced behaviors.
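
If you would rather inspect these defaults programmatically than browse the repository, the following is a minimal sketch. It assumes CrewAI's internal I18N utility (the class that loads the default prompt catalog) is importable from crewai.utilities and exposes a slice() lookup; the slice names shown come from the repository's en.json and may change between versions:

from crewai.utilities import I18N

# Load the default (English) prompt catalog
i18n = I18N()

# Print a few built-in slices to see what agents receive by default
for slice_name in ["role_playing", "task", "no_tools"]:
    print(f"--- {slice_name} ---")
    print(i18n.slice(slice_name))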

Understanding Default System Instructions

Production Transparency Issue: CrewAI automatically injects default instructions into your prompts that you might not be aware of. This section explains what’s happening under the hood and how to gain full control.

When you define an agent with role, goal, and backstory, CrewAI automatically adds additional system instructions that control formatting and behavior. Understanding these default injections is crucial for production systems where you need full prompt transparency.

What CrewAI Automatically Injects

Based on your agent configuration, CrewAI adds different default instructions:

For Agents Without Tools

"I MUST use these formats, my job depends on it!"

For Agents With Tools

"IMPORTANT: Use the following format in your response:

Thought: you should always think about what to do
Action: the action to take, only one name of [tool_names]
Action Input: the input to the action, just a simple JSON object...

For Structured Outputs (JSON/Pydantic)

"Ensure your final answer contains only the content in the following format: {output_format}
Ensure the final output does not include any code block markers like ```json or ```python."
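
This instruction is only injected when a task declares a structured output. The short example below illustrates the trigger; the SalesInsight model is hypothetical:

from pydantic import BaseModel
from crewai import Agent, Task

class SalesInsight(BaseModel):
    trend: str
    confidence: float

analyst = Agent(
    role="Data Analyst",
    goal="Analyze data and provide insights",
    backstory="You are an expert data analyst.",
)

# Declaring output_pydantic (or output_json) is what causes CrewAI to
# append the structured-output instruction quoted above to the prompt.
task = Task(
    description="Analyze the sales data and identify the dominant trend",
    expected_output="A structured trend summary",
    output_pydantic=SalesInsight,
    agent=analyst,
)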

Viewing the Complete System Prompt

To see exactly what prompt is being sent to your LLM, you can inspect the generated prompt:

from crewai import Agent, Crew, Task
from crewai.utilities.prompts import Prompts

# Create your agent
agent = Agent(
    role="Data Analyst",
    goal="Analyze data and provide insights",
    backstory="You are an expert data analyst with 10 years of experience.",
    verbose=True
)

# Create a sample task
task = Task(
    description="Analyze the sales data and identify trends",
    expected_output="A detailed analysis with key insights and trends",
    agent=agent
)

# Create the prompt generator
prompt_generator = Prompts(
    agent=agent,
    has_tools=len(agent.tools) > 0,
    use_system_prompt=agent.use_system_prompt
)

# Generate and inspect the actual prompt
generated_prompt = prompt_generator.task_execution()

# Print the complete system prompt that will be sent to the LLM
if "system" in generated_prompt:
    print("=== SYSTEM PROMPT ===")
    print(generated_prompt["system"])
    print("\n=== USER PROMPT ===")
    print(generated_prompt["user"])
else:
    print("=== COMPLETE PROMPT ===")
    print(generated_prompt["prompt"])

# You can also see how the task description gets formatted
print("\n=== TASK CONTEXT ===")
print(f"Task Description: {task.description}")
print(f"Expected Output: {task.expected_output}")

Overriding Default Instructions

You have several options to gain full control over the prompts:

Option 1: Custom Templates

from crewai import Agent

# Define your own system template without default instructions
custom_system_template = """You are {role}. {backstory}
Your goal is: {goal}

Respond naturally and conversationally. Focus on providing helpful, accurate information."""

custom_prompt_template = """Task: {input}

Please complete this task thoughtfully."""

agent = Agent(
    role="Research Assistant", 
    goal="Help users find accurate information",
    backstory="You are a helpful research assistant.",
    system_template=custom_system_template,
    prompt_template=custom_prompt_template,
    use_system_prompt=True  # Use separate system/user messages
)

Option 2: Custom Prompt File

Create a custom_prompts.json file to override specific prompt slices:

{
  "slices": {
    "no_tools": "\nProvide your best answer in a natural, conversational way.",
    "tools": "\nYou have access to these tools: {tools}\n\nUse them when helpful, but respond naturally.",
    "formatted_task_instructions": "Format your response as: {output_format}"
  }
}

Then use it in your crew:

crew = Crew(
    agents=[agent],
    tasks=[task],
    prompt_file="custom_prompts.json",
    verbose=True
)

Option 3: Disable System Prompts for o1 Models

Some models, such as OpenAI's o1 series, restrict or ignore a separate system message; for these, you can have CrewAI send everything as a single prompt:

agent = Agent(
    role="Analyst",
    goal="Analyze data", 
    backstory="Expert analyst",
    use_system_prompt=False  # Disables system prompt separation
)

Debugging with Observability Tools

For production transparency, integrate with observability platforms to monitor all prompts and LLM interactions. This allows you to see exactly what prompts (including default instructions) are being sent to your LLMs.

See our Observability documentation for detailed integration guides with various platforms including Langfuse, MLflow, Weights & Biases, and custom logging solutions.
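
If you go the custom-logging route, here is a minimal sketch that reuses the Prompts inspection shown earlier; log_agent_prompt is a hypothetical helper, not part of the CrewAI API:

import logging

from crewai.utilities.prompts import Prompts

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("prompt_audit")

def log_agent_prompt(agent) -> None:
    """Log the complete prompt an agent will receive, defaults included."""
    generated = Prompts(
        agent=agent,
        has_tools=len(agent.tools) > 0,
        use_system_prompt=agent.use_system_prompt,
    ).task_execution()
    for part, text in generated.items():
        logger.info("agent=%s part=%s\n%s", agent.role, part, text)

# Call this before crew.kickoff(), e.g.: log_agent_prompt(agent)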

Best Practices for Production

  1. Always inspect generated prompts before deploying to production
  2. Use custom templates when you need full control over prompt content
  3. Integrate observability tools for ongoing prompt monitoring (see Observability docs)
  4. Test with different LLMs as default instructions may work differently across models
  5. Document your prompt customizations for team transparency

The default instructions exist to ensure consistent agent behavior, but they can interfere with domain-specific requirements. Use the customization options above to maintain full control over your agent’s behavior in production systems.

Best Practices for Managing Prompt Files

When engaging in low-level prompt customization, follow these guidelines to keep things organized and maintainable:

  1. Keep files separate – Store your customized prompts in dedicated JSON files outside your main codebase.
  2. Version control – Track changes within your repository, ensuring clear documentation of prompt adjustments over time.
  3. Organize by model or language – Use naming schemes like prompts_llama.json or prompts_es.json to quickly identify specialized configurations (a selection sketch follows this list).
  4. Document changes – Provide comments or maintain a README detailing the purpose and scope of your customizations.
  5. Minimize alterations – Only override the specific slices you genuinely need to adjust, keeping default functionality intact for everything else.
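
As a small illustration of point 3, you can pick the prompt file at crew-construction time based on the model in use. The file names and mapping below are hypothetical:

from crewai import Crew

# Hypothetical mapping from model identifiers to specialized prompt files
PROMPT_FILES = {
    "groq/llama-3.3-70b-versatile": "prompts/prompts_llama.json",
    "gpt-4o": "prompts/prompts_gpt.json",
}

def build_crew(agents, tasks, model: str) -> Crew:
    # Fall back to CrewAI's built-in defaults when no specialized file exists
    return Crew(
        agents=agents,
        tasks=tasks,
        prompt_file=PROMPT_FILES.get(model),
        verbose=True,
    )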

The Simplest Way to Customize Prompts

One straightforward approach is to create a JSON file for the prompts you want to override and then point your Crew at that file:

  1. Craft a JSON file with your updated prompt slices.
  2. Reference that file via the prompt_file parameter in your Crew.

CrewAI then merges your customizations with the defaults, so you don’t have to redefine every prompt. Here’s how:

Example: Basic Prompt Customization

Create a custom_prompts.json file with the prompts you want to modify, making sure the file includes every top-level prompt key it needs, not just the ones you change:

{
  "slices": {
    "format": "When responding, follow this structure:\n\nTHOUGHTS: Your step-by-step thinking\nACTION: Any tool you're using\nRESULT: Your final answer or conclusion"
  }
}

Then integrate it like so:

from crewai import Agent, Crew, Task, Process

# Create agents and tasks as normal
researcher = Agent(
    role="Research Specialist",
    goal="Find information on quantum computing",
    backstory="You are a quantum physics expert",
    verbose=True
)

research_task = Task(
    description="Research quantum computing applications",
    expected_output="A summary of practical applications",
    agent=researcher
)

# Create a crew with your custom prompt file
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    prompt_file="path/to/custom_prompts.json",
    verbose=True
)

# Run the crew
result = crew.kickoff()

With these few edits, you gain low-level control over how your agents communicate and solve tasks.

Optimizing for Specific Models

Different models thrive on differently structured prompts. Making deeper adjustments can significantly boost performance by aligning your prompts with a model’s nuances.

Example: Llama 3.3 Prompting Template

For instance, when working with Meta’s Llama 3.3, deeper customization can mirror the recommended prompt structure described at: https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/#prompt-template

Here’s an example to highlight how you might fine-tune an Agent to leverage Llama 3.3 in code:

from crewai import Agent, Crew, Task, Process
from crewai_tools import DirectoryReadTool, FileReadTool

# Define templates for system, user (prompt), and assistant (response) messages
system_template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>{{ .System }}<|eot_id|>"""
prompt_template = """<|start_header_id|>user<|end_header_id|>{{ .Prompt }}<|eot_id|>"""
response_template = """<|start_header_id|>assistant<|end_header_id|>{{ .Response }}<|eot_id|>"""

# Create an Agent using Llama-specific layouts
principal_engineer = Agent(
    role="Principal Engineer",
    goal="Oversee AI architecture and make high-level decisions",
    backstory="You are the lead engineer responsible for critical AI systems",
    verbose=True,
    llm="groq/llama-3.3-70b-versatile",  # Using the Llama 3 model
    system_template=system_template,
    prompt_template=prompt_template,
    response_template=response_template,
    tools=[DirectoryReadTool(), FileReadTool()]
)

# Define a sample task
engineering_task = Task(
    description="Review AI implementation files for potential improvements",
    expected_output="A summary of key findings and recommendations",
    agent=principal_engineer
)

# Create a Crew for the task
llama_crew = Crew(
    agents=[principal_engineer],
    tasks=[engineering_task],
    process=Process.sequential,
    verbose=True
)

# Execute the crew
result = llama_crew.kickoff()
print(result.raw)

Through this deeper configuration, you can exercise comprehensive, low-level control over your Llama-based workflows without needing a separate JSON file.

Conclusion

Low-level prompt customization in CrewAI opens the door to highly customized, complex use cases. By establishing well-organized prompt files (or direct inline templates), you can accommodate various models, languages, and specialized domains. This level of flexibility ensures you can craft precisely the AI behavior you need, all while knowing CrewAI still provides reliable defaults when you don’t override them.

You now have the foundation for advanced prompt customizations in CrewAI. Whether you’re adapting for model-specific structures or domain-specific constraints, this low-level approach lets you shape agent interactions in highly specialized ways.