
Memory & Continual Learning

Status: Maturing · 1%

Long-context understanding, persistent memory, RAG systems, and lifelong learning. Giving AI the ability to remember and learn continuously.
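The retrieval side of RAG can be illustrated with a toy sketch: embed documents and a query, rank by cosine similarity, and return the top matches. This is a minimal illustration using bag-of-words counts; real systems use learned embeddings and a vector store, and all names here are hypothetical.

```python
# Toy RAG retrieval sketch: bag-of-words vectors + cosine similarity.
# Illustrative only; production systems use learned embeddings and ANN indexes.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "The context window limits how many tokens a model can attend to.",
    "Retrieval augmented generation fetches relevant documents at query time.",
    "Continual learning updates a model without forgetting earlier tasks.",
]
print(retrieve("how does retrieval augmented generation work", docs, k=1))
```

The retrieved passages would then be prepended to the model's prompt, trading a bounded context window for an external, queryable memory.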

Tags: long-context, RAG, episodic-memory, continual-learning, retrieval, memory
21 Papers · 4 Milestones · $0 Funding · 2 Benchmarks

Key Benchmarks

RULER

Long-context benchmark testing retrieval and reasoning over 128K+ tokens

96% (Human: 98%) · Leader: Jamba-1.5-large · high saturation
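Long-context benchmarks of this kind are commonly built from synthetic needle-in-a-haystack probes: a fact is buried at a controlled depth in filler text and the model is asked to recall it. A minimal sketch of such a task generator, not RULER's actual harness:

```python
# Sketch of a needle-in-a-haystack style probe generator.
# Illustrative only; RULER's real task suite and scoring differ.
import random

def make_haystack(needle: str, n_filler: int, depth: float, seed: int = 0) -> str:
    """Bury `needle` among filler sentences at a relative depth in [0, 1]."""
    rng = random.Random(seed)
    filler = [f"Filler sentence number {rng.randint(0, 10**6)}."
              for _ in range(n_filler)]
    pos = int(depth * len(filler))
    return " ".join(filler[:pos] + [needle] + filler[pos:])

needle = "The magic number is 7481."
context = make_haystack(needle, n_filler=1000, depth=0.5)
# A long-context model would be prompted with `context` plus
# "What is the magic number?" and scored on whether it answers 7481.
```

Sweeping `n_filler` and `depth` measures how recall degrades with context length and with the needle's position.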

InfiniteBench

Ultra-long context benchmark testing 100K+ token understanding

63.3% (Human: 95%) · Leader: Llama 3.2 3B Instruct · low saturation


Recent Milestones

Gemini 1.5 Pro 2M Context

Google expands Gemini 1.5 Pro to a 2 million token context window.

May 14, 2024 · benchmark · Impact: 88/100

GPT-4 Turbo 128K Context

OpenAI expands GPT-4 Turbo to 128K tokens with improved retrieval over long documents.

Apr 9, 2024 · release · Impact: 78/100

Claude 3 200K Context

Anthropic releases Claude 3 with a 200K context window and improved long-context performance.

Mar 4, 2024 · release · Impact: 85/100

Gemini 1.5 Pro 1M Context Window

Google releases Gemini 1.5 Pro with a 1 million token context window, 10x its previous limit.

Feb 15, 2024 · release · Impact: 92/100

Leading Organizations

OpenAI
Anthropic
Google
Meta
Cohere

ArXiv Categories

cs.LG · cs.AI · cs.CL · cs.IR
