Dialogue
Research papers, repositories, and articles about dialogue
SpeakRL: Synergizing Reasoning, Speaking, and Acting in Language Models with Reinforcement Learning
Argues that current task-oriented agents are over-optimized as passive instruction followers and under-use conversation itself as an action. SpeakRL introduces a reinforcement-learning setup that rewards models for asking clarifying questions when the user’s intent is ambiguous, balancing ‘asking’ against ‘acting’. On synthetic task-oriented dialogue scenarios, the trained agents substantially improve task-completion rates without inflating the number of turns, suggesting that proactive clarification is a powerful, underused control knob.
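The ask-vs-act trade-off can be illustrated with a toy reward shaping, not SpeakRL’s actual objective: a small per-turn cost discourages gratuitous clarification, while a task bonus makes asking pay off exactly when intent is ambiguous. All constants and the 50% guessing probability below are invented for illustration.

```python
TURN_COST = 0.1   # hypothetical cost per dialogue turn
TASK_BONUS = 1.0  # hypothetical reward for completing the task

def episode_reward(ambiguous: bool, asked: bool) -> float:
    """Expected reward for one toy episode.

    If the request is ambiguous, acting without asking succeeds only
    half the time (the agent guesses the intent); asking resolves the
    ambiguity at the price of one extra turn.
    """
    turns = 2 if asked else 1
    success_prob = 0.5 if (ambiguous and not asked) else 1.0
    return TASK_BONUS * success_prob - TURN_COST * turns

# Under these numbers, clarifying is optimal only when intent is ambiguous:
print(episode_reward(ambiguous=True, asked=True))    # 0.8  (ask, then succeed)
print(episode_reward(ambiguous=True, asked=False))   # 0.4  (guess, often fail)
print(episode_reward(ambiguous=False, asked=True))   # 0.8  (needless extra turn)
print(episode_reward(ambiguous=False, asked=False))  # 0.9  (act directly)
```

An RL agent trained against a signal shaped like this learns to ask precisely when ambiguity makes guessing expensive, which is the balance the paper describes.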
MAC: A Multi-Agent Framework for Interactive User Clarification in Multi-turn Conversations
Proposes a multi-agent architecture where specialized conversational agents coordinate to decide when and how to ask clarification questions in ambiguous multi-turn tasks. Instead of a monolithic assistant, MAC assigns roles and coordination rules so that the ‘right’ agent takes the lead on resolving uncertainty. This is a nice complement to SpeakRL: one focuses on *whether* to clarify, the other on *who* clarifies and how to coordinate in complex workflows.
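The “who clarifies” idea can be sketched as a routing rule that maps each kind of ambiguity to the agent responsible for it, with a lead agent as fallback. The roles, table, and `Turn` structure here are invented for illustration and are not MAC’s actual architecture.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Turn:
    text: str
    ambiguity: Optional[str]  # e.g. "schedule", "payment", or None if clear

# Hypothetical coordination rule: each ambiguity type has an owner agent.
ROLE_TABLE = {
    "schedule": "calendar_agent",
    "payment": "billing_agent",
}

def route_clarification(turn: Turn) -> Optional[str]:
    """Return which agent should ask the clarifying question, if any."""
    if turn.ambiguity is None:
        return None  # nothing to clarify; the assistant can act directly
    return ROLE_TABLE.get(turn.ambiguity, "lead_agent")

print(route_clarification(Turn("book it for tomorrow", "schedule")))  # calendar_agent
print(route_clarification(Turn("pay the invoice", "payment")))        # billing_agent
print(route_clarification(Turn("thanks!", None)))                     # None
```

Even this trivial dispatcher shows the division of labor the summary describes: the *decision to clarify* (SpeakRL’s concern) is separate from the *assignment of the clarifier* (MAC’s concern).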