TeleMem: Building Long-Term and Multimodal Memory for Agentic AI
If you find this project helpful, please give us a ⭐️ on GitHub for the latest updates.
Contributions welcome! Feel free to open an issue or submit a pull request.
TeleMem is an agent memory management layer that serves as a high-performance drop-in replacement for Mem0 with a single line of code (`import telemem as mem0`). It is deeply optimized for complex scenarios involving multi-turn dialogues, character modeling, long-term information storage, and semantic retrieval.
Through its unique context-aware enhancement mechanism, TeleMem provides conversational AI with core infrastructure offering higher accuracy, faster performance, and stronger character memory capabilities.
Building upon this foundation, TeleMem implements video understanding, multimodal reasoning, and visual question answering capabilities. Through a complete pipeline of video frame extraction, caption generation, and vector database construction, AI Agents can effortlessly store, retrieve, and reason over video content just like handling text memories.
The ultimate goal of the TeleMem project is to use an agent's hindsight to improve its foresight.
TeleMem, where memory lives on and intelligence grows strong.
Latest Updates
- [2026-01-28] TeleMem v1.3.0 has been released!
- [2026-01-22] The TeleMem Tech Report has been updated to its 4th version!
- [2026-01-13] The TeleMem Tech Report has been released on arXiv!
- [2026-01-09] TeleMem v1.2.0 has been released!
- [2025-12-31] TeleMem v1.1.0 has been released!
- [2025-12-05] TeleMem v1.0.0 has been released!
Research Highlights
- Significantly improved memory accuracy: 86.33% on the ZH-4O Chinese multi-character long-dialogue benchmark, 16.13 percentage points higher than Mem0 (70.20%).
- Doubled speed performance: Millisecond-level semantic retrieval enabled by efficient buffering and batch writing.
- Greatly reduced token cost: Optimized token usage delivers the same performance with significantly lower LLM overhead.
- Precise character memory preservation: Automatically builds independent memory profiles for each character, eliminating confusion.
- Automated video processing pipeline: raw video → frame extraction → caption generation → vector database, fully automated.
- ReAct-style video QA: multi-step reasoning plus tool calling for precise video content understanding.
Table of Contents
- Project Introduction
- TeleMem vs Mem0: Core Advantages
- Experimental Results
- Quick Start
- Project Structure
- Core Functions
- Multimodal Extensions
- Data Storage Explanation
- Development and Contribution
- Acknowledgements
Project Introduction
TeleMem enables conversational AI to maintain stable, natural, and continuous worldviews and character settings during long-term interactions through a deeply optimized pipeline of character-aware summarization → semantic clustering deduplication → efficient storage → precise retrieval.
Features
- Automatic memory extraction: Extracts and structures key facts from dialogues.
- Semantic clustering & deduplication: Uses LLMs to semantically merge similar memories, reducing conflicts and improving consistency.
- Character-profiled memory management: Builds independent memory archives for each character in a dialogue, ensuring precise isolation and personalized management.
- Efficient asynchronous writing: Employs a buffer + batch-flush mechanism for high-performance, stable persistence.
- Precise semantic retrieval: Combines FAISS + JSON dual storage for fast recall and human-readable auditability.
Applicable Scenarios
- Multi-character virtual agent systems
- Long-memory AI assistants (e.g., customer service, companionship, creative co-pilots)
- Complex narrative/world-building in virtual environments
- Dialogue scenarios with strong contextual dependencies
- Video content QA and reasoning
- Multimodal agent memory management
- Long video understanding and information retrieval
TeleMem vs Mem0: Core Advantages
TeleMem deeply refactors Mem0 to address characterization, long-term memory, and high performance. Key differences:
| Capability Dimension | Mem0 | TeleMem |
|---|---|---|
| Multi-character separation | ❌ Not supported | ✅ Automatically creates independent memory profiles per character |
| Summary quality | Basic summarization | ✅ Context-aware, character-focused prompts covering key entities, actions, and timestamps |
| Deduplication mechanism | Vector similarity filtering | ✅ LLM-based semantic clustering that merges similar memories |
| Write performance | Streaming, single writes | ✅ Batch flush + concurrency: 2–3× faster writes |
| Storage format | SQLite / vector DB | ✅ FAISS + JSON metadata dual-write: fast retrieval, human-readable |
| Multimodal capability | Single-image-to-text only | ✅ Video multimodal memory: full video processing pipeline + ReAct multi-step reasoning QA |
Experimental Results
Dataset
We evaluate on the ZH-4O Chinese multi-character long-dialogue dataset constructed in the paper MOOM: Maintenance, Organization and Optimization of Memory in Ultra-Long Role-Playing Dialogues:
- Average dialogue length: 600 turns per conversation
- Scenarios: daily interactions, plot progression, evolving character relationships
Memory capability was assessed via QA benchmarks, e.g.:
```json
[
  {
    "question": "What is Zhao Qi's nickname for Bai Yulan? A Xiaobai B Xiaoyu C Lanlan D Yuyu",
    "answer": "A"
  },
  {
    "question": "What is the relationship between Zhao Qi and Bai Yulan? A Classmates B Teacher and student C Enemies D Neighbors",
    "answer": "B"
  }
]
```

Experimental Configuration
- LLM: Qwen3-8B (thinking mode disabled)
- Embedding model: Qwen3-Embedding-8B
- Metric: QA accuracy
| Method | Overall (%) |
|---|---|
| RAG | 62.45 |
| Mem0 | 70.20 |
| MOOM | 72.60 |
| A-mem | 73.78 |
| Memobase | 76.78 |
| TeleMem | 86.33 |
Quick Start
Environment Preparation
```shell
# Create and activate virtual environment
conda create -n telemem python=3.10
conda activate telemem

# Install dependencies
pip install -e .
```
Example
Set your OpenAI API key:
```shell
export OPENAI_API_KEY="your-openai-api-key"
```
```python
# python examples/quickstart.py
import telemem as mem0

memory = mem0.Memory()

messages = [
    {"role": "user", "content": "Jordan, did you take the subway to work again today?"},
    {"role": "assistant", "content": "Yes, James. The subway is much faster than driving. I leave at 7 o'clock and it's just not crowded."},
    {"role": "user", "content": "Jordan, I want to try taking the subway too. Can you tell me which station is closest?"},
    {"role": "assistant", "content": "Of course, James. You take Line 2 to Civic Center Station, exit from Exit A, and walk 5 minutes to the company."},
]

memory.add(messages=messages, user_id="Jordan")
results = memory.search("What transportation did Jordan use to go to work today?", user_id="Jordan")
print(results)
```
By default Memory() wires up:
- OpenAI `gpt-4.1-nano-2025-04-14` for summary extraction and updates
- OpenAI `text-embedding-3-small` embeddings (1536 dimensions)
- A FAISS vector store with on-disk persistence
If you want to customize the configuration, modify `config/config.yaml`.
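For orientation, a hypothetical `config.yaml` reflecting the defaults listed above might look like the sketch below. The key names are illustrative assumptions, not the actual schema; consult the shipped `config/config.yaml` for the real structure.

```yaml
# Illustrative shape only -- key names are assumptions, values come from
# the documented defaults (model names, 1536-dim embeddings, FAISS store).
llm:
  provider: openai
  model: gpt-4.1-nano-2025-04-14
embedder:
  provider: openai
  model: text-embedding-3-small
  embedding_dims: 1536
vector_store:
  provider: faiss
  path: ./faiss_db
```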
Project Structure
```
telemem/
├── assets/                    # Documentation assets and figures
├── vendor/
│   └── mem0/                  # Upstream repository source code
├── overlay/
│   └── patches/               # TeleMem custom patch files (.patch)
├── scripts/                   # Overlay management scripts
│   ├── init_upstream.sh       # Initialize upstream subtree
│   ├── update_upstream.sh     # Sync upstream and reapply patches
│   ├── record_patch.sh        # Record local modifications as patches
│   └── apply_patches.sh       # Apply patches
├── baselines/                 # Baseline implementations for comparative evaluation
│   ├── RAG                    # Retrieval-Augmented Generation baseline
│   ├── MemoBase               # MemoBase memory management system
│   ├── MOOM                   # MOOM dual-branch narrative memory framework
│   ├── A-mem                  # A-mem agent memory baseline
│   └── Mem0                   # Mem0 baseline implementation
├── config/
│   └── config.yaml            # TeleMem configuration
├── data/                      # Small sample datasets for evaluation or demonstration
├── examples/                  # Code examples and tutorial demos
│   ├── quickstart.py          # Quick start
│   └── quickstart_mm.py       # Quick start (multimodal)
├── docs/
│   ├── TeleMem-Overlay.md     # Overlay development guide (English)
│   └── TeleMem-Overlay-ZH.md  # Overlay development guide (Chinese)
├── telemem/                   # TeleMem source code
├── tests/                     # TeleMem tests
├── PATCHES.md                 # Patch list and descriptions
├── README.md                  # English README
├── README-ZH.md               # Chinese README
└── pyproject.toml             # Python project configuration
```
Core Functions
Add Memory (add)
The add() method injects one or more dialogue turns into the memory system.
```python
def add(
    self,
    messages,
    *,
    user_id: Optional[str] = None,
    agent_id: Optional[str] = None,
    run_id: Optional[str] = None,
    metadata: Optional[Dict[str, Any]] = None,
    infer: bool = True,
    memory_type: Optional[str] = None,
    prompt: Optional[str] = None,
)
```
Parameter Description

| Parameter | Type | Required | Description |
|---|---|---|---|
| `messages` | `List[Dict[str, str]]` | ✅ Yes | List of dialogue messages, each with `role` (user/assistant) and `content` |
| `metadata` | `Dict[str, Any]` | ✅ Yes | Must include `sample_id` (unique session ID) and `user` (list of character names) |
| `user_id` / `agent_id` / `run_id` | `Optional[str]` | ❌ No | Mem0-compatible parameters (ignored in TeleMem) |
| `infer` | `bool` | ❌ No | Whether to auto-generate memory summaries (default: `True`) |
| `memory_type` | `Optional[str]` | ❌ No | Memory category (auto-classified if omitted) |
| `prompt` | `Optional[str]` | ❌ No | Custom prompt for summarization (uses optimized default if omitted) |
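The required shapes from the parameter table above can be illustrated with plain dictionaries. The `validate_payload` helper below is our own sketch for checking the documented constraints, not part of TeleMem:

```python
# Building an add() payload matching the documented parameter table.
# validate_payload is an illustrative helper, not a TeleMem API.

def validate_payload(messages, metadata):
    """Check the documented constraints on messages and metadata."""
    assert all(m.get("role") in ("user", "assistant") and "content" in m
               for m in messages), "each message needs a role and content"
    assert "sample_id" in metadata, "metadata must carry a unique session ID"
    assert isinstance(metadata.get("user"), list), "user must list character names"
    return True

messages = [
    {"role": "user", "content": "Jordan, did you take the subway today?"},
    {"role": "assistant", "content": "Yes, James. Line 2 is much faster."},
]
metadata = {"sample_id": "session_001", "user": ["Jordan", "James"]}
print(validate_payload(messages, metadata))  # True
```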
Internal Workflow of add()

- Message preprocessing: merge consecutive messages from the same speaker; normalize turn structure.
- Multi-perspective summarization:
  - Global event summary
  - Character 1's perspective (actions, preferences, relationships)
  - Character 2's perspective
- Vectorization & similarity search: generate embeddings and retrieve existing similar memories.
- Batch processing: when the buffer threshold is reached, invoke the LLM to semantically merge similar memories.
- Persistence: dual-write to FAISS (for retrieval) and JSON (for metadata).
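The buffer-and-batch-flush idea behind the last two steps can be sketched in a few lines of plain Python. `MemoryBuffer` and its methods are illustrative names, not TeleMem's actual internals:

```python
# Minimal sketch of buffered batch writing, assuming a hypothetical
# MemoryBuffer; TeleMem's real pipeline also merges similar records
# with an LLM before persisting.

class MemoryBuffer:
    """Accumulates memory records and flushes them in one batch."""

    def __init__(self, threshold=4):
        self.threshold = threshold
        self.pending = []   # records waiting to be written
        self.store = []     # stands in for the FAISS + JSON dual store

    def add(self, record):
        self.pending.append(record)
        if len(self.pending) >= self.threshold:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        # One batched write instead of many single writes.
        self.store.extend(self.pending)
        self.pending.clear()

buf = MemoryBuffer(threshold=2)
buf.add({"summary": "Jordan takes Line 2 to work."})
buf.add({"summary": "Jordan leaves home at 7 o'clock."})
print(len(buf.store))  # 2
```

Batching amortizes index-update and I/O cost across many records, which is where the documented 2–3× write speedup comes from.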
Search Memory (search)
Performs semantic vector-based retrieval of relevant memories with context-aware recall.
```python
def search(
    self,
    query: str,
    *,
    user_id: Optional[str] = None,
    agent_id: Optional[str] = None,
    run_id: Optional[str] = None,
    limit: int = 5,
    filters: Optional[Dict[str, Any]] = None,
    threshold: Optional[float] = None,
    rerank: bool = True,
)
```
Parameter Description

| Parameter | Type | Required | Description |
|---|---|---|---|
| `query` | `str` | ✅ Yes | Natural language query |
| `run_id` | `str` | ✅ Yes | Session ID (must match the `sample_id` used in `add`) |
| `limit` | `int` | ❌ No | Max number of results (default: 5) |
| `threshold` | `float` | ❌ No | Similarity threshold (0–1; auto-tuned if omitted) |
| `filters` | `Dict[str, Any]` | ❌ No | Custom filters (e.g., by character, time range) |
| `rerank` | `bool` | ❌ No | Whether to rerank results (default: `True`) |
| `user_id` / `agent_id` | `Optional[str]` | ❌ No | Mem0-compatible (no effect in TeleMem) |
Search is based on FAISS vector retrieval, supporting millisecond-level responses.
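The dual-store retrieval design (a vector index for similarity, JSON metadata for readable records) can be illustrated with a toy, dependency-free version. The hand-made 2-D vectors and the `search` helper below are ours, standing in for FAISS indices and real embeddings:

```python
# Toy cosine-similarity retrieval mirroring the index-plus-metadata
# design; pure Python for brevity, whereas the real system uses FAISS.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# `vectors` plays the role of the FAISS index; `meta` of *_meta.json.
vectors = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
meta = [
    {"summary": "Jordan takes the subway to work."},
    {"summary": "Jordan leaves home at 7 o'clock."},
    {"summary": "James wants directions to the station."},
]

def search(query_vec, limit=2):
    # Rank all stored vectors by similarity, return top-k metadata.
    order = sorted(range(len(vectors)),
                   key=lambda i: cosine(query_vec, vectors[i]),
                   reverse=True)
    return [meta[i] for i in order[:limit]]

print(search([1.0, 0.1], limit=1))
# [{'summary': 'Jordan takes the subway to work.'}]
```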
Multimodal Extensions
Beyond text memory, TeleMem further extends multimodal capabilities. Drawing inspiration from Deep Video Discovery's Agentic Search and Tool Use approach, we implemented two core methods in the TeleMemory class to support intelligent storage and semantic retrieval of video content.
| Method | Description |
|---|---|
| `add_mm()` | Process video into retrievable memory (frame extraction → caption generation → vector database) |
| `search_mm()` | Query video content using natural language, supporting ReAct-style multi-step reasoning |
Add Multimodal Memory (add_mm)
```python
def add_mm(
    self,
    video_path: str,
    *,
    frames_root: str = "video/frames",
    captions_root: str = "video/captions",
    vdb_root: str = "video/vdb",
    clip_secs: int | None = None,
    emb_dim: int | None = None,
    subtitle_path: str | None = None,
)
```
Parameter Description

| Parameter | Type | Required | Description |
|---|---|---|---|
| `video_path` | `str` | ✅ Yes | Source video file path, e.g., `"video/3EQLFHRHpag.mp4"` |
| `frames_root` | `str` | ❌ No | Frame output root directory (default `"video/frames"`) |
| `captions_root` | `str` | ❌ No | Caption JSON output root directory (default `"video/captions"`) |
| `vdb_root` | `str` | ❌ No | Vector database output root directory (default `"video/vdb"`) |
| `clip_secs` | `int` | ❌ No | Seconds per clip; overrides `config.CLIP_SECS` |
| `emb_dim` | `int` | ❌ No | Embedding dimension; read from config by default |
| `subtitle_path` | `str` | ❌ No | Optional subtitle file path (`.srt`) |
add_mm() Internal Flow

- Frame Extraction: `decode_video_to_frames` decodes the video to JPEG frames at the configured FPS
- Caption Generation: `process_video` uses a VLM (e.g., Qwen3-Omni) to generate detailed descriptions for each clip
- Vector Database Construction: `init_single_video_db` generates embeddings for semantic retrieval
Smart caching: if the target file for a stage already exists, that stage is automatically skipped to save computational resources.
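The stage-skipping cache check can be sketched with a small stdlib-only helper. `run_stage` is an illustrative name, not a TeleMem function:

```python
# Sketch of "skip the stage if its output already exists", using only
# the standard library; run_stage is a hypothetical helper.
import os
import tempfile

def run_stage(name, target_path, produce):
    """Run `produce` only if `target_path` does not already exist."""
    if os.path.exists(target_path):
        return False  # cached: stage skipped
    produce(target_path)
    return True       # stage actually ran

with tempfile.TemporaryDirectory() as d:
    captions = os.path.join(d, "captions.json")
    write = lambda p: open(p, "w").write("{}")
    ran_first = run_stage("captions", captions, write)
    ran_again = run_stage("captions", captions, write)
    print(ran_first, ran_again)  # True False
```

The same pattern applies per stage (frames, captions, VDB), so an interrupted pipeline resumes from the first missing artifact instead of recomputing everything.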
Return Value Example
```json
{
  "video_name": "3EQLFHRHpag",
  "frames_dir": "video/frames/3EQLFHRHpag/frames",
  "caption_json": "video/captions/3EQLFHRHpag/captions.json",
  "vdb_json": "video/vdb/3EQLFHRHpag/3EQLFHRHpag_vdb.json"
}
```

Search Multimodal Memory (search_mm)
```python
def search_mm(
    self,
    question: str,
    video_db_path: str = "video/vdb/3EQLFHRHpag_vdb.json",
    video_caption_path: str = "video/captions/captions.json",
    max_iterations: int = 15,
)
```
Parameter Description

| Parameter | Type | Required | Description |
|---|---|---|---|
| `question` | `str` | ✅ Yes | Question string (supports A/B/C/D multiple-choice format) |
| `video_db_path` | `str` | ❌ No | Video vector database path |
| `video_caption_path` | `str` | ❌ No | Video caption JSON path |
| `max_iterations` | `int` | ❌ No | Maximum MMCoreAgent reasoning iterations (default 15) |
ReAct-Style Reasoning Tools

`search_mm` internally uses `MMCoreAgent`, employing a THINK → ACTION → OBSERVATION loop with three specialized tools:
| Tool Name | Function |
|---|---|
| `global_browse_tool` | Get a global overview of video events and themes |
| `clip_search_tool` | Search for specific content using semantic queries |
| `frame_inspect_tool` | Inspect frame details within a specific time range |
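A toy version of the THINK → ACTION → OBSERVATION loop is sketched below. The stubbed tools, the fixed plan, and the final-answer rule all stand in for the real LLM-driven `MMCoreAgent`; only the loop shape matches the description above:

```python
# Toy ReAct loop: a fixed plan and canned tool outputs replace the LLM.
# Tool names mirror the documented ones; their bodies are stubs.

def global_browse_tool(_):
    return "The video is about rising sea levels affecting coastal towns."

def clip_search_tool(query):
    return f"Clip 10_20 matches '{query}': residents discuss flooding causes."

TOOLS = {"global_browse_tool": global_browse_tool,
         "clip_search_tool": clip_search_tool}

def react_answer(question, max_iterations=3):
    # THINK: a real agent would ask the LLM to choose each next action;
    # here a fixed plan stands in for that decision process.
    plan = [("global_browse_tool", None),
            ("clip_search_tool", "cause of the problems")]
    observations = []
    for tool, arg in plan[:max_iterations]:
        obs = TOOLS[tool](arg)      # ACTION: call the chosen tool
        observations.append(obs)    # OBSERVATION: feed the next THINK step
    # Final answer derived from observations (hardcoded rule for the demo).
    return "B" if any("sea levels" in o for o in observations) else "?"

print(react_answer("The problems people encounter are caused by what?"))  # B
```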
Multimodal Example
Run the multimodal demo:
```shell
python examples/quickstart_mm.py
```
On the first run, all frames, captions, and the VDB JSON are generated under `output_dir` (default `data/samples/video/`). For reproducibility, we also ship these intermediates in the repo so you can run queries directly without recomputing.
Complete code example:
```python
import telemem as mem0
import os

# Initialize
memory = mem0.Memory()

# Define paths
video_path = "data/samples/video/3EQLFHRHpag.mp4"
video_name = os.path.splitext(os.path.basename(video_path))[0]
output_dir = "data/samples/video"
os.makedirs(output_dir, exist_ok=True)

# Step 1: Add video to memory (auto-processing)
vdb_json_path = f"{output_dir}/vdb/{video_name}/{video_name}_vdb.json"
if not os.path.exists(vdb_json_path):
    result = memory.add_mm(
        video_path=video_path,
        output_dir=output_dir,
    )
    print(f"Video processing complete: {result}")
else:
    print(f"VDB already exists: {vdb_json_path}")

# Step 2: Query video content
question = """The problems people encounter in the video are caused by what?
(A) Catastrophic weather.
(B) Global warming.
(C) Financial crisis.
(D) Oil crisis.
"""
messages = memory.search_mm(
    question=question,
    output_dir=output_dir,
    max_iterations=15,
)

# Extract final answer
from core import extract_choice_from_msg

answer = extract_choice_from_msg(messages)
print(f"Answer: ({answer})")
```
Data Storage
Text Memory Storage
TeleMem automatically creates a structured storage layout under ./faiss_db/, organized by session and character:
```
faiss_db/
├── session_001_events.index
├── session_001_events_meta.json
├── session_001_person_1.index
├── session_001_person_1_meta.json
├── session_001_person_2.index
└── session_001_person_2_meta.json
```
Metadata Example (`_meta.json`)
```json
{
  "summary": "Characters discussed the upcoming action plan.",
  "sample_id": "session_001",
  "round_index": 3,
  "timestamp": "2024-01-01T00:00:00Z",
  "user": "Jordan"  // Only present in person_* metadata files
}
```

All memories include summary, round number, timestamp, and character, facilitating auditing and debugging.
Multimodal Memory Storage
TeleMem generates video-related storage files under the `data/samples/video/` directory:
```
video/
├── frames/
│   └── <video_name>/
│       └── frames/
│           ├── frame_000001_n0.00.jpg
│           ├── frame_000002_n0.50.jpg
│           └── ...
├── captions/
│   └── <video_name>/
│       ├── captions.json          # Clip descriptions + subject registry
│       └── ckpt/                  # Checkpoints for resume
│           ├── 0_10.json
│           └── 10_20.json
└── vdb/
    └── <video_name>/
        └── <video_name>_vdb.json  # Semantic retrieval vector database
```
captions.json Structure
```json
{
  "0_10": {
    "caption": "The narrator discusses climate data, showing melting glaciers..."
  },
  "10_20": {
    "caption": "Scene shifts to coastal communities affected by rising sea levels..."
  },
  "subject_registry": {
    "narrator": {
      "name": "narrator",
      "appearance": ["professional attire"],
      "identity": ["climate scientist"],
      "first_seen": "00:00:00"
    }
  }
}
```

Development and Contribution
- Overlay development process: TeleMem-Overlay.md
- Chinese documentation: README-ZH.md
License
Acknowledgements
TeleMem's development has been deeply inspired by open-source communities and cutting-edge research. We extend our sincere gratitude to the following projects and teams:
Star History
If you find this project helpful, please give us a ⭐️.
Made with ❤️ by the Ubiquitous AGI team at TeleAI.

