On Jan 15, 2026, Howard University announced ASCEND-AI, a $4 million, four-year AI literacy initiative run in partnership with Bowie State University. Funded by a U.S. Department of Education grant, the program will train faculty and students to use AI responsibly and is expected to serve about 400 students and 50 faculty per year.
This article aggregates reporting from one news source. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
While not about new models or chips, this grant goes to the heart of who will actually be prepared to work with near-AGI systems. ASCEND-AI aims to embed AI modules across existing courses at two historically Black institutions and eventually extend training to secondary-school teachers. It explicitly foregrounds academic integrity, ethical use, and critical evaluation of AI outputs, not just tool proficiency.
For the race to AGI, the composition of the talent pipeline matters as much as the raw speed of models. If advanced AI is built and governed by an unrepresentative slice of society, its blind spots and harms will be amplified. By directing federal funding toward AI literacy at HBCUs, this initiative helps diversify who gets to shape, critique, and deploy these systems. It also treats AI literacy as a cross-curricular competence, more like writing or statistics than a niche CS elective.
As frontier systems become more agentic and embedded in workflows, the gap will widen between organizations that approach them critically and those that treat them as infallible black boxes. Programs like ASCEND-AI are early attempts to put critical capacity on the right side of that divide.