The trend towards stricter ethical guidelines and human oversight of AI in journalism reflects growing recognition of the risks posed by automated content generation. The shift signals a proactive approach to safeguarding information integrity and promoting responsible AI use, benefiting media professionals and audiences alike while reducing reliance on fully automated news production. As organizations emphasize transparency and ethical practice, the industry is moving toward a model that prioritizes human judgment and accountability.
UNESCO reported that more than 100 journalists from across Paraguay have completed a program focused on the ethical use of artificial intelligence in journalism, including modules on AI‑generated content verification, data protection, algorithmic bias and newsroom best practices. The initiative, led by local human‑rights and journalist organizations with support from UNESCO’s International Programme for the Development of Communication, produced a Good Practice Guide on ethical AI use in journalism. The effort highlights growing concern in Latin America over AI’s impact on information integrity and aims to equip media workers to use AI tools critically while safeguarding human rights.

An opinion piece syndicated from China Daily and published by Pakistan’s Dawn argues that AI is a qualitatively different technology from past inventions because it rivals humans’ core knowledge‑producing abilities, creating deep risks to social stability, identity and economic structures. The article calls for stronger global governance frameworks, public awareness and ethical norms to ensure AI systems remain under meaningful human control and are steered toward broadly beneficial outcomes rather than concentrated power or military escalation. ([dawn.com](https://www.dawn.com/news/1959887/how-humans-can-control-risks-arising-from-ai))

CBC News has updated its internal guidelines on the use of AI, emphasizing that artificial intelligence is a tool for, not the creator of, published content. The policy permits AI for assistive tasks such as data analysis, drafting suggestions and accessibility services, but bars it from writing full articles or generating public-facing images or video, and requires explicit disclosure to audiences when AI plays a significant role in a story.