Memory & Continual Learning
Long-context understanding, persistent memory, retrieval-augmented generation (RAG) systems, and lifelong learning: giving AI systems the ability to remember and to learn continuously.
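To make the RAG idea mentioned above concrete, here is a minimal sketch of the retrieval step: passages in "memory" are embedded as toy bag-of-words count vectors, and the passage most similar to the query by cosine similarity is returned. All names here are illustrative; production systems use learned dense embeddings and a vector index rather than word counts.

```python
# Minimal RAG retrieval sketch (toy embeddings, not a real system).
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k passages most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

memory = [
    "Gemini 1.5 Pro supports a 1 million token context window.",
    "Claude 3 ships with a 200K context window.",
    "GPT-4 Turbo supports 128K tokens of context.",
]
print(retrieve("What is the Claude 3 context window?", memory, k=1))
```

The retrieved passage would then be prepended to the model prompt, letting the model answer from retrieved memory rather than from parameters alone.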
Recent Papers
EuroLLM-22B: Technical Report
Miguel Moura Ramos, Duarte M. Alves, Hippolyte Gisserot-Boukhlef +15 more
Shared LoRA Subspaces for almost Strict Continual Learning
Prakhar Kaushik, Ankit Vaidya, Shravan Chaudhari +2 more
KV-CoRE: Benchmarking Data-Dependent Low-Rank Compressibility of KV-Caches in LLMs
Jian Chen, Zhuoran Wang, Jiayu Qin +6 more
MemSkill: Learning and Evolving Memory Skills for Self-Evolving Agents
Haozhen Zhang, Quanyu Long, Jianzhu Bao +4 more
RRAttention: Dynamic Block Sparse Attention via Per-Head Round-Robin Shifts for Long-Context Inference
Siran Liu, Guoxia Wang, Sa Wang +7 more
ORBITFLOW: SLO-Aware Long-Context LLM Serving with Fine-Grained KV Cache Reconfiguration
Xinyue Ma, Heelim Hong, Taegeon Um +4 more
Explore with Long-term Memory: A Benchmark and Multimodal LLM-based Reinforcement Learning Framework for Embodied Exploration
Sen Wang, Bangwei Liu, Zhenkun Gao +4 more
Toward Ultra-Long-Horizon Agentic Science: Cognitive Accumulation for Machine Learning Engineering
Xinyu Zhu, Yuzhu Cai, Zexi Liu +11 more
Over-Searching in Search-Augmented Large Language Models
Roy Xie, Deepak Gopinath, David Qiu +4 more
Chaining the Evidence: Robust Reinforcement Learning for Deep Search Agents with Citation-Aware Rubric Rewards
Jiajie Zhang, Xin Lv, Ling Feng +2 more
Recent Milestones
Gemini 1.5 Pro 2M Context
Google expands Gemini 1.5 Pro to a 2 million token context window.
GPT-4 Turbo 128K Context
OpenAI expands GPT-4 Turbo to 128K tokens with improved retrieval over long documents.
Claude 3 200K Context
Anthropic releases Claude 3 with 200K context window and improved long-context performance.
Gemini 1.5 Pro 1M Context Window
Google releases Gemini 1.5 Pro with a 1 million token context window, a 10x increase over previous limits.
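The context-window sizes in the milestones above can be compared with a quick back-of-the-envelope check of whether a document fits. The sketch below uses the common ~4 characters-per-token heuristic for English text; this is an assumption for illustration, not a real tokenizer, and the model names are informal labels.

```python
# Rough fit check against the context limits named in the milestones.
# The 4 chars/token ratio is a heuristic; use the model's actual
# tokenizer for real token counts.
CONTEXT_LIMITS = {
    "gemini-1.5-pro": 2_000_000,  # 2M-token expansion
    "claude-3": 200_000,
    "gpt-4-turbo": 128_000,
}

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Approximate token count from character length."""
    return max(1, round(len(text) / chars_per_token))

def fits(text: str, model: str) -> bool:
    """True if the estimated token count is within the model's window."""
    return estimate_tokens(text) <= CONTEXT_LIMITS[model]

doc = "x" * 1_000_000  # ~250K estimated tokens
print(fits(doc, "gemini-1.5-pro"), fits(doc, "gpt-4-turbo"))
```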