Federated Learning
Research papers, repositories, and articles about federated learning
Fed-SE: Federated Self-Evolution for Privacy-Constrained Multi-Environment LLM Agents
Fed-SE is a federated learning framework for LLM agents that must improve across heterogeneous environments under strict privacy constraints. It combines local parameter-efficient fine-tuning on high-return trajectories with global aggregation in a low-rank subspace, reducing negative transfer and boosting average success rates by ~18% over federated baselines. ([huggingface.co](https://huggingface.co/papers/2512.08870))
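The description above sketches the basic loop: clients filter their own rollouts by return, fine-tune a parameter-efficient (low-rank) update locally, and only those low-rank factors are aggregated centrally. The toy sketch below illustrates that flow; it is not the authors' code, and the class names, the return cutoff, and the SVD-based "fine-tune" stand-in are all illustrative assumptions rather than anything taken from the paper.

```python
# Minimal sketch of the Fed-SE-style recipe described above (assumptions labeled):
# each client keeps only high-return trajectories, fits a low-rank (LoRA-style)
# update locally, and the server averages the low-rank products. Raw trajectories
# never leave the client.
import numpy as np

RANK = 4             # adapter rank (assumed)
DIM = 16             # hidden dimension of the adapted layer (toy size)
RETURN_CUTOFF = 0.7  # keep only trajectories above this return (assumed threshold)


class Client:
    """One environment silo: raw trajectories never leave this object."""

    def __init__(self, seed: int):
        rng = np.random.default_rng(seed)
        # Private rollouts: (features, return) pairs standing in for agent trajectories.
        self.trajectories = [(rng.normal(size=DIM), rng.random()) for _ in range(32)]

    def local_update(self):
        """Fit a rank-RANK update on high-return trajectories only."""
        kept = [x for x, ret in self.trajectories if ret >= RETURN_CUTOFF]
        data = np.stack(kept) if kept else np.zeros((1, DIM))
        # Toy "fine-tune": a low-rank factorization of the filtered data's covariance,
        # standing in for parameter-efficient fine-tuning on high-return rollouts.
        cov = data.T @ data / max(len(kept), 1)
        u, s, vt = np.linalg.svd(cov)
        A = u[:, :RANK] * np.sqrt(s[:RANK])          # (DIM, RANK)
        B = np.sqrt(s[:RANK])[:, None] * vt[:RANK]   # (RANK, DIM)
        return A, B


def aggregate_lora(updates):
    """Server step: average the clients' low-rank products, never their data."""
    deltas = [A @ B for A, B in updates]  # each (DIM, DIM), rank <= RANK
    return np.mean(deltas, axis=0)        # merged global update


if __name__ == "__main__":
    clients = [Client(seed=s) for s in range(3)]
    global_delta = aggregate_lora([c.local_update() for c in clients])
    print("aggregated update shape:", global_delta.shape)
```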
Fed-SE: Federated Self-Evolution for Privacy-Constrained Multi-Environment LLM Agents
Hugging Face frames Fed-SE as a way to let LLM agents "self-evolve" across different clients and environments without sharing raw trajectories. For teams deploying agents in regulated or siloed settings, it is an interesting recipe for federated RL that reduces gradient conflicts across heterogeneous tasks. ([huggingface.co](https://huggingface.co/papers/2512.08870))