
Memory & Continual Learning

Maturing · 4%

Long-context understanding, persistent memory, RAG systems, and lifelong learning. Giving AI the ability to remember and learn continuously.
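The RAG pattern mentioned above can be sketched minimally: embed documents, retrieve the nearest by similarity, and prepend them to the prompt. The sketch below is illustrative only — it uses a toy bag-of-words "embedding" and in-memory search; real systems use a learned encoder (e.g. a sentence-transformer) and a vector store.

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (stand-in for a learned encoder)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Gemini 1.5 Pro supports a 2 million token context window.",
    "RULER tests retrieval and reasoning over 128K+ tokens.",
    "Claude 3 ships with a 200K context window.",
]
# Retrieval-augmented prompt: fetched context goes in front of the question.
context = retrieve("what is the context window of Gemini?", docs)
prompt = "Answer using the context:\n" + "\n".join(context) + "\nQ: ..."
```

The same embed/retrieve/augment loop underlies persistent-memory agents as well, with the document store replaced by a log of past interactions.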

long-context · RAG · episodic-memory · continual-learning · retrieval · memory
35 Papers · 4 Milestones · $0 Funding · 2 Benchmarks

Key Benchmarks

RULER

Long-context benchmark testing retrieval and reasoning over 128K+ tokens

Score: 87.1% (Human: 98%)
Leader: Phi-3.5-MoE-instruct · medium saturation

InfiniteBench

Ultra-long context benchmark testing 100K+ token understanding

Score: 60.39% (Human: 95%)
Leader: Qwen2-70B x MR · low saturation
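Benchmarks like the two above are commonly built on "needle-in-a-haystack" retrieval: a fact is buried at varying depths in long filler text, and the model is scored on recovering it. A toy harness in that spirit is sketched below; `mock_model` is a hypothetical stand-in (a substring scan) for a real long-context model call.

```python
def build_haystack(needle: str, n_tokens: int, depth: float) -> str:
    """Insert the needle at a fractional depth into roughly n_tokens of filler."""
    filler = ["the quick brown fox jumps over the lazy dog"] * max(1, n_tokens // 9)
    pos = int(len(filler) * depth)
    return ". ".join(filler[:pos] + [needle] + filler[pos:])

def mock_model(context: str, question: str) -> str:
    """Hypothetical stand-in for an LLM call: scan for the keyword."""
    for chunk in context.split(". "):
        if "magic number" in chunk:
            return chunk
    return "not found"

def score(model, needle: str, lengths: list[int], depths: list[float]) -> dict:
    """Grid over context lengths and needle depths; True = needle recovered."""
    results = {}
    for n in lengths:
        for d in depths:
            ctx = build_haystack(needle, n, d)
            results[(n, d)] = needle in model(ctx, "What is the magic number?")
    return results

needle = "the magic number is 7481"
results = score(mock_model, needle, lengths=[1000], depths=[0.0, 0.5, 1.0])
accuracy = sum(results.values()) / len(results)
```

Real benchmarks extend this grid to 100K+ token contexts and add multi-hop and aggregation variants, which is where current models fall below the human scores listed above.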

Recent Papers

Recent Milestones

Gemini 1.5 Pro 2M Context

Google expands Gemini 1.5 Pro to 2 million token context window.

May 14, 2024 · benchmark · Impact: 88/100

GPT-4 Turbo 128K Context

OpenAI expands GPT-4 Turbo to 128K tokens with improved retrieval over long documents.

Apr 9, 2024 · release · Impact: 78/100

Claude 3 200K Context

Anthropic releases Claude 3 with 200K context window and improved long-context performance.

Mar 4, 2024 · release · Impact: 85/100

Gemini 1.5 Pro 1M Context Window

Google releases Gemini 1.5 Pro with a 1 million token context window, a 10x increase over previous limits.

Feb 15, 2024 · release · Impact: 92/100

Leading Organizations

OpenAI
Anthropic
Google
Meta
Cohere

ArXiv Categories

cs.LG · cs.AI · cs.CL · cs.IR

Related Frontiers