Ongoing geopolitical tensions and regulatory uncertainty are driving a shift toward localized AI infrastructure, as regions such as Malaysia build out their own compute capacity on leading hardware like Nvidia's GPUs. The trend marks a move away from reliance on U.S. hyperscalers and toward a more diversified, competitive landscape for AI resources, one that could empower emerging markets while challenging established players. As companies race to secure AI capability amid export controls and compliance risk, the balance of power in the AI ecosystem is poised for significant reconfiguration.

Nvidia has acquired SchedMD, the company behind the open‑source Slurm workload scheduler used to manage large high‑performance computing and AI clusters. Financial terms were not disclosed, but Nvidia said it will continue distributing Slurm as open source while selling enterprise support, effectively pulling a critical piece of data‑center orchestration into its own portfolio. The deal underscores how much of Nvidia’s AI power comes not just from GPUs but from the surrounding software—CUDA, libraries, and now core scheduling infrastructure—that locks customers into its ecosystem. By owning Slurm, Nvidia can better optimize large training and inference jobs for its hardware, while making it harder for rival chip vendors to compete on equal footing in complex multi‑GPU environments. For AI customers, the upside is potentially smoother scaling and support, but it also concentrates even more leverage in Nvidia’s hands at a time when regulators and hyperscalers are already wary of its dominance.
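To make concrete what Slurm actually does in these clusters, here is a minimal sketch of a batch script requesting GPUs for a distributed training job. The partition name, resource counts, and training script are hypothetical; real values depend entirely on a given cluster's configuration.

```bash
#!/bin/bash
# Hypothetical Slurm batch script: ask the scheduler for 2 nodes
# with 8 GPUs each for a distributed training run. Partition name,
# node/GPU counts, and train.py are illustrative placeholders.
#SBATCH --job-name=llm-train
#SBATCH --partition=gpu          # hypothetical GPU partition
#SBATCH --nodes=2
#SBATCH --gpus-per-node=8
#SBATCH --ntasks-per-node=8      # one task per GPU
#SBATCH --time=24:00:00

# srun launches one training process per task across both nodes.
srun python train.py --epochs 10
```

Submitted with `sbatch`, a script like this sits in Slurm's queue until the requested GPUs free up, after which Slurm places and launches the processes across nodes. That queueing and placement layer, which decides which jobs get scarce multi-GPU capacity and when, is precisely the infrastructure Nvidia now owns.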

GIBO Holdings has announced a strategic collaboration with Malaysia‑based E Total Technology to plan and deploy a network of AI compute centers in Malaysia built around NVIDIA’s latest high‑performance GPU architectures. Under the framework, E Total will lead local execution—site selection, technical and commercial feasibility studies, regulatory coordination and data‑center infrastructure planning—while both parties evaluate future capacity expansions. The facilities are expected to support dense AI training and inference workloads for enterprises, research institutions and digital‑economy players, strengthening Malaysia’s aspiration to become a regional AI compute hub. Although financial terms weren’t disclosed, the move signals continued fragmentation of global AI infrastructure build‑out as mid‑tier players and emerging markets race to secure scarce top‑end NVIDIA capacity rather than relying solely on U.S. hyperscalers.([prnewswire.com](https://www.prnewswire.com/news-releases/gibo-announces-strategic-collaboration-with-e-total-technology-sdn-bhd-to-accelerate-deployment-of-ai-compute-centers-featuring-nvidias-most-advanced-chips-302642156.html))
U.S. Senator Elizabeth Warren called for Nvidia CEO Jensen Huang and Commerce Secretary Howard Lutnick to testify regarding President Trump’s planned greenlight for sales of Nvidia’s H200 AI chip to China. The request highlights ongoing policy volatility around export controls for advanced AI hardware and the national-security debate over compute access. For Nvidia and the broader semiconductor ecosystem, such political moves can quickly reshape revenue outlooks, customer allocation strategies, and compliance risk. For AI developers and cloud providers, the policy direction influences where frontier training and inference capacity can be deployed—and which regions face structural compute constraints.