An Associated Press report, carried by outlets including The Independent and Halifax's CityNews, warns that extremist organizations such as the Islamic State group are beginning to use generative AI for recruitment, deepfake propaganda and cyber operations, even if their most ambitious plans remain "aspirational" for now. Analysts say AI image and video tools let small, under‑resourced groups pump out emotionally charged fake content at scale, which social‑media algorithms can then amplify to radicalize supporters and obscure real‑world atrocities.

The story also flags worries that AI could help militants develop biological or chemical weapons by lowering technical barriers, a risk now explicitly mentioned in US homeland security threat assessments. US lawmakers are debating legislation that would require annual reviews of AI‑enabled extremist threats and make it easier for AI companies to share data on abuse patterns, underscoring how AI safety is increasingly merging with counter‑terrorism and cyber‑defense policy debates.
This article aggregates reporting from two news sources. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
