
On December 20, 2025, Dominican outlet elDinero reported on Meta’s new SAM Audio model, which can isolate instruments, voices or noise from complex audio using text, visual or time-span prompts. The model is available for free testing in Meta’s Segment Anything Playground and can also be downloaded.
SAM Audio is noteworthy because it extends the “segment anything” paradigm from vision into sound, and does so with a unified, prompt-driven model. Users can isolate a guitar, a barking dog or background traffic using natural language, click on a source in video, or mark a time span, and the model separates that component from a complex mix. ([eldinero.com.do](https://eldinero.com.do/347424/meta-facilita-la-edicion-de-musica-y-sonido-con-su-nuevo-modelo-sam-audio/))
For the AGI race, this is another step toward genuinely multimodal understanding. Being able to jointly reason over audio, text and visual cues is a prerequisite for systems that operate robustly in messy real-world environments—think domestic robots, AR assistants or surveillance‑sensitive public infrastructure. While SAM Audio is a specialist model rather than a general agent, it will feed high-quality, labeled multimodal data and architectures back into Meta’s broader model stack.
Strategically, Meta keeps leaning into open releases that are highly practical for creators and developers. That helps it win over the ecosystem even if it lags some rivals on headline benchmark scores. Over time, a family of open, high‑performing specialist models for images, 3D and audio could make Meta the default toolkit for multimodal experimentation, which is exactly the substrate you'd want to own if and when someone stitches these capabilities into more general, agentic systems.
Meta signed multi‑year AI content licensing agreements with major news publishers so Meta AI can surface real‑time news results with attribution and links.
Meta acquired AI wearables startup Limitless, maker of a pendant-style device and the Rewind memory app, to accelerate its roadmap for AI-enabled consumer hardware.
Meta and Microsoft announced a strategic partnership to distribute Llama models on Azure AI.


