Introduction to Memory Systems in CrewAI

The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. This system comprises short-term memory, long-term memory, entity memory, and contextual memory, each serving a unique purpose in aiding agents to remember, reason, and learn from past interactions.

Memory System Components

  • Short-Term Memory: Temporarily stores recent interactions and outcomes using RAG, enabling agents to recall and use information relevant to their current context during the current execution.
  • Long-Term Memory: Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time.
  • Entity Memory: Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses RAG for storing entity information.
  • Contextual Memory: Maintains the context of interactions by combining ShortTermMemory, LongTermMemory, and EntityMemory, aiding the coherence and relevance of agent responses over a sequence of tasks or a conversation.
  • External Memory: Enables integration with external memory systems and providers (such as Mem0), allowing specialized memory storage and retrieval across different applications. Supports custom storage implementations for flexible memory management.
  • User Memory: ⚠️ DEPRECATED: This component is deprecated and will be removed in a future version. Use External Memory instead.

How Memory Systems Empower Agents

  1. Contextual Awareness: With short-term and contextual memory, agents gain the ability to maintain context over a conversation or task sequence, leading to more coherent and relevant responses.

  2. Experience Accumulation: Long-term memory allows agents to accumulate experiences, learning from past actions to improve future decision-making and problem-solving.

  3. Entity Understanding: By maintaining entity memory, agents can recognize and remember key entities, enhancing their ability to process and interact with complex information.
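As a rough illustration of the contextual-awareness point, contextual memory can be thought of as merging retrieval hits from the short-term, long-term, and entity stores into a single context block. The ListStore class and build_context function below are hypothetical stand-ins for illustration, not the actual CrewAI internals:

```python
# Conceptual sketch only: NOT the CrewAI implementation.
# ListStore and build_context are hypothetical stand-ins showing how
# contextual memory might combine results from the three stores.

class ListStore:
    def __init__(self, items):
        self.items = items

    def search(self, query):
        # Toy keyword match standing in for RAG retrieval
        return [item for item in self.items if query.lower() in item.lower()]

def build_context(query, *stores):
    # Gather hits from every store and join them into one context block
    hits = [hit for store in stores for hit in store.search(query)]
    return "\n".join(hits)

short_term = ListStore(["User asked about beach destinations today"])
long_term = ListStore(["A past beach trip plan was well received"])
entities = ListStore(["'Copacabana Beach' is a known place entity"])

print(build_context("beach", short_term, long_term, entities))
```

The real system scores and filters hits with embeddings rather than keyword matching, but the shape of the result, one merged context handed to the agent, is the same.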

Implementing Memory in Your Crew

When configuring a crew, you can enable and customize each memory component to suit the crew’s objectives and the nature of the tasks it will perform. Memory is disabled by default; activate it by setting memory=True in the crew configuration. Memory uses OpenAI embeddings by default, but you can change this by setting embedder to a different model. You can also initialize each memory component with your own storage instance.

The ‘embedder’ setting only applies to Short-Term Memory, which uses Chroma for RAG. Long-Term Memory uses SQLite3 to store task results, and there is currently no way to override these storage implementations. By default, the data storage files are saved to a platform-specific location determined by the appdirs package; this location can be overridden with the CREWAI_STORAGE_DIR environment variable.
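To make the storage-location behavior concrete, here is a small sketch of how the directory could be resolved. resolve_storage_dir is a hypothetical helper, not part of CrewAI, but CREWAI_STORAGE_DIR is the environment variable CrewAI reads:

```python
import os

# Hypothetical helper (not part of CrewAI): resolve the storage directory.
# CrewAI computes the platform-specific default via the appdirs package;
# CREWAI_STORAGE_DIR, when set, overrides it.
def resolve_storage_dir(platform_default="~/.local/share/CrewAI"):
    return os.path.expanduser(os.getenv("CREWAI_STORAGE_DIR", platform_default))

os.environ["CREWAI_STORAGE_DIR"] = "/tmp/my_crew_storage"
print(resolve_storage_dir())  # /tmp/my_crew_storage
```

Setting the variable before kickoff keeps the SQLite and Chroma files for a crew in one predictable place.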

Example: Configuring Memory for a Crew

Code
from crewai import Crew, Agent, Task, Process

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True
)

Example: Configuring Custom Memory Instances and Storage

Code
from crewai import Crew, Process
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Assemble your crew with memory capabilities
my_crew: Crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    # Long-term memory for persistent storage across sessions
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="/my_crew1/long_term_memory_storage.db"
        )
    ),
    # Short-term memory for current context using RAG
    short_term_memory=ShortTermMemory(
        storage=RAGStorage(
            embedder_config={
                "provider": "openai",
                "config": {
                    "model": "text-embedding-3-small"
                }
            },
            type="short_term",
            path="/my_crew1/"
        )
    ),
    # Entity memory for tracking key information about entities
    entity_memory=EntityMemory(
        storage=RAGStorage(
            embedder_config={
                "provider": "openai",
                "config": {
                    "model": "text-embedding-3-small"
                }
            },
            type="entities",
            path="/my_crew1/"
        )
    ),
    verbose=True,
)

Security Considerations

When configuring memory storage:

  • Use environment variables for storage paths (e.g., CREWAI_STORAGE_DIR)
  • Never hardcode sensitive information like database credentials
  • Consider access permissions for storage directories
  • Use relative paths when possible to maintain portability
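The access-permissions point can be sketched as follows. The directory path is illustrative, and CrewAI itself does not set permissions for you:

```python
import os
import stat

# Sketch: create the storage directory with owner-only (0700) permissions
# before pointing CrewAI at it via CREWAI_STORAGE_DIR. The path below is
# illustrative; CrewAI does not manage permissions itself.
storage_dir = os.getenv("CREWAI_STORAGE_DIR", "./crew_storage")
os.makedirs(storage_dir, exist_ok=True)
os.chmod(storage_dir, stat.S_IRWXU)  # rwx for owner, nothing for group/other
```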

Example using environment variables:

import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Configure storage path using environment variable
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path=f"{storage_path}/memory.db"
        )
    )
)

Configuration Examples

Basic Memory Configuration

from crewai import Crew
from crewai.memory import LongTermMemory

# Simple memory configuration
crew = Crew(memory=True)  # Uses default storage locations

Custom Storage Configuration

from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Configure custom storage paths
crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(db_path="./memory.db")
    )
)

Integrating Mem0 for Enhanced User Memory

Mem0 is a self-improving memory layer for LLM applications, enabling personalized AI experiences.

Using Mem0 API platform

To include user-specific memory, get your API key from the Mem0 platform and refer to the Mem0 docs for adding user preferences. In this case, user_memory is set to MemoryClient from mem0.

Code
import os
from crewai import Crew, Process
from mem0 import MemoryClient

# Set environment variables for Mem0
os.environ["MEM0_API_KEY"] = "m0-xx"

# Step 1: Create a Crew with User Memory

crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    process=Process.sequential,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},
        "user_memory": {}  # Set user_memory explicitly to an empty dictionary (known issue; a fix is in progress)
    },
)

Additional Memory Configuration Options

If you want to access a specific organization and project, you can set the org_id and project_id parameters in the memory configuration.

Code
from crewai import Crew

crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john", "org_id": "my_org_id", "project_id": "my_project_id"},
        "user_memory": {}  # Set user_memory explicitly to an empty dictionary (known issue; a fix is in progress)
    },
)

Using Local Mem0 memory

If you want to use local Mem0 memory with a custom configuration, set the local_mem0_config parameter inside the config. If both the MEM0_API_KEY environment variable and local_mem0_config are set, the API platform takes priority over the local configuration. See the Mem0 local-configuration docs for details. In this case, user_memory is set to Memory from mem0.

Code
from crewai import Crew


#local mem0 config
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333
        }
    },
    "llm": {
        "provider": "openai",
        "config": {
            "api_key": "your-api-key",
            "model": "gpt-4"
        }
    },
    "embedder": {
        "provider": "openai",
        "config": {
            "api_key": "your-api-key",
            "model": "text-embedding-3-small"
        }
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://your-instance",
            "username": "neo4j",
            "password": "password"
        }
    },
    "history_db_path": "/path/to/history.db",
    "version": "v1.1",
    "custom_fact_extraction_prompt": "Optional custom prompt for fact extraction for memory",
    "custom_update_memory_prompt": "Optional custom prompt for update memory"
}

crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john", 'local_mem0_config': config},
        "user_memory": {}  # Set user_memory explicitly to an empty dictionary (known issue; a fix is in progress)
    },
)

Using External Memory

External Memory is a powerful feature that allows you to integrate external memory systems with your CrewAI applications. This is particularly useful when you want to use specialized memory providers or maintain memory across different applications.

Basic Usage with Mem0

The most common way to use External Memory is with Mem0 as the provider:

from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory

agent = Agent(
    role="You are a helpful assistant",
    goal="Plan a vacation for the user",
    backstory="You are a helpful assistant that can plan a vacation for the user",
    verbose=True,
)
task = Task(
    description="Give things related to the user's vacation",
    expected_output="A plan for the vacation",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    memory=True,
    external_memory=ExternalMemory(
        embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}} # you can provide an entire Mem0 configuration
    ),
)

crew.kickoff(
    inputs={"question": "which destination is better for a beach vacation?"}
)

Using External Memory with Custom Storage

You can also create custom storage implementations for External Memory. Here’s an example of how to create a custom storage:

from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
from crewai.memory.storage.interface import Storage


class CustomStorage(Storage):
    def __init__(self):
        self.memories = []

    def save(self, value, metadata=None, agent=None):
        self.memories.append({"value": value, "metadata": metadata, "agent": agent})

    def search(self, query, limit=10, score_threshold=0.5):
        # Naive substring match over stored values; replace with real
        # vector search (score_threshold is unused in this toy version)
        matches = [m for m in self.memories if query.lower() in str(m["value"]).lower()]
        return matches[:limit]

    def reset(self):
        self.memories = []


# Create external memory with custom storage
external_memory = ExternalMemory(
    storage=CustomStorage(),
    embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}},
)

agent = Agent(
    role="You are a helpful assistant",
    goal="Plan a vacation for the user",
    backstory="You are a helpful assistant that can plan a vacation for the user",
    verbose=True,
)
task = Task(
    description="Give things related to the user's vacation",
    expected_output="A plan for the vacation",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    memory=True,
    external_memory=external_memory,
)

crew.kickoff(
    inputs={"question": "which destination is better for a beach vacation?"}
)

Additional Embedding Providers

Using OpenAI embeddings (already default)

Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": 'text-embedding-3-small'
        }
    }
)

Alternatively, you can pass an OpenAIEmbeddingFunction instance directly through the embedder parameter.

Example:

Code
from crewai import Crew, Agent, Task, Process
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "custom",
        "config": {
            "embedder": OpenAIEmbeddingFunction(
                api_key="YOUR_API_KEY",
                model_name="text-embedding-3-small"
            )
        }
    }
)

Using Ollama embeddings

Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
)

Using Google AI embeddings

Prerequisites

Before using Google AI embeddings, ensure you have:

  • Access to the Gemini API
  • The necessary API keys and permissions

You will need to update your pyproject.toml dependencies:

dependencies = [
    "google-generativeai>=0.8.4",  # current as of January 2025 (crewai 0.100.0, crewai-tools 0.33.0)
    "crewai[tools]>=0.100.0,<1.0.0"
]
Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "google",
        "config": {
            "api_key": "<YOUR_API_KEY>",
            "model": "<model_name>"
        }
    }
)

Using Azure OpenAI embeddings

Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "api_key": "YOUR_API_KEY",
            "api_base": "YOUR_API_BASE_PATH",
            "api_version": "YOUR_API_VERSION",
            "model_name": 'text-embedding-3-small'
        }
    }
)

Using Vertex AI embeddings

Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "vertexai",
        "config": {
            "project_id": "YOUR_PROJECT_ID",
            "region": "YOUR_REGION",
            "api_key": "YOUR_API_KEY",
            "model_name": "textembedding-gecko"
        }
    }
)

Using Cohere embeddings

Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "cohere",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model": "<model_name>"
        }
    }
)

Using VoyageAI embeddings

Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "voyageai",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model": "<model_name>"
        }
    }
)

Using HuggingFace embeddings

Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "huggingface",
        "config": {
            "api_url": "<api_url>",
        }
    }
)

Using Watson embeddings

Code
from crewai import Crew, Agent, Task, Process

# Note: Ensure you have installed and imported `ibm_watsonx_ai` for Watson embeddings to work.

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "watson",
        "config": {
            "model": "<model_name>",
            "api_url": "<api_url>",
            "api_key": "<YOUR_API_KEY>",
            "project_id": "<YOUR_PROJECT_ID>",
        }
    }
)

Using Amazon Bedrock embeddings

Code
# Note: Ensure you have installed `boto3` for Bedrock embeddings to work.

import os
import boto3
from crewai import Crew, Agent, Task, Process

boto3_session = boto3.Session(
    region_name=os.environ.get("AWS_REGION_NAME"),
    aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY")
)

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    embedder={
        "provider": "bedrock",
        "config": {
            "session": boto3_session,
            "model": "amazon.titan-embed-text-v2:0",
            "vector_dimension": 1024
        }
    },
    verbose=True
)

Adding Custom Embedding Function

Code
from crewai import Crew, Agent, Task, Process
from chromadb import Documents, EmbeddingFunction, Embeddings

# Create a custom embedding function
class CustomEmbedder(EmbeddingFunction):
    def __call__(self, input: Documents) -> Embeddings:
        # Generate one embedding vector per input document
        # (fixed dummy vectors here for illustration)
        return [[1.0, 2.0, 3.0] for _ in input]

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "custom",
        "config": {
            "embedder": CustomEmbedder()
        }
    }
)

Resetting Memory via CLI

crewai reset-memories [OPTIONS]

Resetting Memory Options

  • -l, --long: Reset LONG TERM memory. (flag, default: False)
  • -s, --short: Reset SHORT TERM memory. (flag, default: False)
  • -e, --entities: Reset ENTITIES memory. (flag, default: False)
  • -k, --kickoff-outputs: Reset LATEST KICKOFF TASK OUTPUTS. (flag, default: False)
  • -kn, --knowledge: Reset KNOWLEDGE storage. (flag, default: False)
  • -a, --all: Reset ALL memories. (flag, default: False)

Note: To use the CLI command, your crew must be defined in a file called crew.py in the same directory.

Resetting Memory via crew object


my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True
)

my_crew.reset_memories(command_type='all')  # Resets all the memory

Resetting Memory Options

  • long: Reset LONG TERM memory.
  • short: Reset SHORT TERM memory.
  • entities: Reset ENTITIES memory.
  • kickoff_outputs: Reset LATEST KICKOFF TASK OUTPUTS.
  • knowledge: Reset KNOWLEDGE memory.
  • all: Reset ALL memories.

Benefits of Using CrewAI’s Memory System

  • 🦾 Adaptive Learning: Crews become more efficient over time, adapting to new information and refining their approach to tasks.
  • 🫡 Enhanced Personalization: Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
  • 🧠 Improved Problem Solving: Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.

Conclusion

Integrating CrewAI’s memory system into your projects is straightforward. By leveraging the provided memory components and configurations, you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.