As Meta, Microsoft, and Apple move into trading electricity to power AI data centers, a strategic trend is emerging: tech companies are diversifying into energy markets to secure their computational needs. The shift marks a significant evolution in how AI infrastructure is managed, suggesting a future where energy procurement becomes as critical as hardware acquisition, potentially disrupting traditional energy sectors and creating new opportunities for energy innovation.


Nvidia CEO Jensen Huang met with President Trump and Republican senators on Capitol Hill to discuss export controls on advanced AI chips. He said he supports federal export-control policy but warned against state-by-state AI regulation, calling it a national security risk. Coverage in U.S. and Indian outlets highlights Huang’s push for a single federal AI standard that preserves U.S. competitiveness while allowing Nvidia to continue selling powerful AI processors globally, including to China, under controlled conditions.
Nvidia released benchmark data showing its latest AI server, which packs 72 of its top chips into a single system, can deliver roughly a 10x performance gain when serving large mixture‑of‑experts models such as Moonshot AI’s Kimi K2 Thinking and DeepSeek’s models. The results aim to show that even as some new models train more efficiently, Nvidia’s high‑end servers remain critical for large‑scale inference, reinforcing its dominance against rivals like AMD and Cerebras in the AI deployment market.
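Mixture‑of‑experts models activate only a few expert networks per token, yet every expert’s weights must stay resident in fast memory, which is why serving them rewards systems that pool many accelerators behind one interconnect. As a rough intuition for that trade‑off, here is a minimal top‑k MoE routing sketch in NumPy; the dimensions are invented for readability, and this is not Nvidia’s benchmark code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only (real MoE models are far larger).
d_model, n_experts, top_k, n_tokens = 64, 8, 2, 4

# One weight matrix per expert: all of them must be resident in memory,
# even though each token only activates top_k of them.
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route each token to its top_k experts and mix their outputs."""
    logits = x @ router_w                          # (n_tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of chosen experts
    # Softmax over only the selected experts' logits.
    sel = np.take_along_axis(logits, top, axis=-1)
    gates = np.exp(sel - sel.max(axis=-1, keepdims=True))
    gates /= gates.sum(axis=-1, keepdims=True)

    out = np.zeros_like(x)
    for t in range(x.shape[0]):                    # per-token dispatch
        for k in range(top_k):
            e = top[t, k]
            out[t] += gates[t, k] * (x[t] @ experts[e])
    return out

x = rng.standard_normal((n_tokens, d_model))
y = moe_forward(x)
print(y.shape)  # (4, 64): only 2 of 8 experts ran per token, but all 8 were loaded
```

The toy makes the asymmetry visible: compute scales with top_k, but the memory footprint scales with n_experts, so large MoE models are easiest to serve when many chips share one coherent pool, as in the 72‑chip system above.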
Reuters reports that an acute global shortage of memory chips is emerging as tech giants race to build AI data centers, diverting capacity into high-bandwidth memory for GPUs and away from traditional DRAM and flash used in consumer devices. Major AI players including Microsoft, Google, ByteDance, OpenAI, Amazon, Meta, Alibaba and Tencent are scrambling to secure supply from Samsung, SK Hynix and Micron, with SK Hynix warning the shortfall could last through late 2027, potentially delaying AI infrastructure projects and adding inflationary pressure worldwide.
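Back‑of‑envelope arithmetic shows why HBM demand crowds out consumer memory, since HBM is built on the same DRAM fabs. The figures below are illustrative assumptions, not numbers from the Reuters report, apart from the H200’s published 141 GB capacity.

```python
# Rough, illustrative arithmetic: how much memory a hypothetical GPU buildout
# absorbs, expressed in consumer-device equivalents. All inputs are assumptions.
HBM_PER_GPU_GB = 141        # published H200 HBM3e capacity
GPUS_DEPLOYED = 1_000_000   # hypothetical fleet size for illustration
LAPTOP_DRAM_GB = 16         # typical consumer laptop

total_hbm_gb = HBM_PER_GPU_GB * GPUS_DEPLOYED
laptop_equivalents = total_hbm_gb / LAPTOP_DRAM_GB
print(f"{total_hbm_gb / 1e9:.2f} EB of HBM ≈ {laptop_equivalents:,.0f} laptops' worth of DRAM")
```

Even a hypothetical million‑GPU fleet absorbs the DRAM equivalent of several million laptops, before counting the conventional DRAM and flash in the servers themselves.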
Silicon Valley startup Vinci has come out of stealth with a physics‑driven AI platform that it claims can run chip and hardware simulations up to 1,000x faster than traditional finite element analysis tools, without training on customer data. The company disclosed $46 million in total seed and Series A funding led by Xora Innovation and Eclipse, with backing from Khosla Ventures, to expand deployments at leading semiconductor manufacturers.
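For context on the baseline being accelerated: finite element analysis discretizes a physical domain and spends most of its runtime solving a large sparse linear system. Below is a deliberately tiny 1D Poisson example in NumPy, a toy stand‑in for industrial FEA tools and in no way Vinci’s method.

```python
import numpy as np

# Minimal 1D finite element solve of -u'' = f on (0, 1), u(0) = u(1) = 0,
# with linear elements on a uniform mesh: a toy version of what FEA tools do.
n = 100                       # interior nodes
h = 1.0 / (n + 1)             # element size
x = np.linspace(h, 1 - h, n)  # interior node coordinates

# Stiffness matrix for linear elements: tridiagonal (2, -1) scaled by 1/h.
K = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

f = np.sin(np.pi * x)         # source term
b = f * h                     # lumped load vector

u = np.linalg.solve(K, b)     # the expensive step FEA tools spend time in

# Exact solution for this f is sin(pi x) / pi^2; check the discretization error.
print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))  # small, shrinks with h
```

Real chip and hardware simulations involve millions of unknowns in 3D, which is where the linear solve dominates runtime and where a learned, physics‑driven surrogate would claim its 1,000x speedup.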
Santa Clara–based d-Matrix closed a $275 million Series C round at a $2 billion valuation to expand its full-stack AI inference platform, which combines Corsair accelerators, JetStream networking and Aviator software for large language model serving. The oversubscribed round, led by a global consortium including BullhoundCapital, Triatomic Capital and Temasek with participation from QIA, EDBI and Microsoft’s M12, will fund global deployments and roadmap advances such as 3D memory stacking to deliver up to 10× faster, more energy‑efficient inference than GPU-based systems. ([theaiinsider.tech](https://theaiinsider.tech/2025/11/29/d-matrix-announces-275m-in-funding-to-power-the-age-of-ai-inference/))
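Speedup claims for inference hardware usually trace back to memory bandwidth: generating each token requires streaming essentially all model weights past the compute units, so single‑stream decode speed is bounded by bandwidth divided by model size in bytes. The sketch below works that arithmetic with made‑up numbers; they are assumptions for illustration, not d‑Matrix or GPU specifications.

```python
# Back-of-envelope roofline for single-stream LLM decode, where generation
# speed is limited by how fast weights stream from memory. Inputs are
# illustrative assumptions, not vendor specifications.
def decode_tokens_per_sec(params_billion, bytes_per_param, mem_bw_tb_s):
    bytes_per_token = params_billion * 1e9 * bytes_per_param  # weights read per token
    return mem_bw_tb_s * 1e12 / bytes_per_token

# A 70B-parameter model in 8-bit weights on two hypothetical machines:
for name, bw in [("accelerator A (3 TB/s)", 3.0), ("accelerator B (30 TB/s)", 30.0)]:
    print(f"{name}: ~{decode_tokens_per_sec(70, 1, bw):.0f} tokens/s upper bound")
```

On this simple model, a 10x bandwidth advantage maps directly to roughly 10x decode throughput, which is why memory‑centric designs such as 3D stacking target inference in particular.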

Major Chinese firms including Alibaba and ByteDance are training their latest large language models in Southeast Asian data centers to access Nvidia chips while navigating U.S. export restrictions, according to Financial Times reporting cited by Reuters. DeepSeek is cited as an exception, training domestically, while Huawei is said to be collaborating on next‑generation Chinese AI chips. ([reuters.com](https://www.reuters.com/world/china/chinas-tech-giants-move-ai-model-training-overseas-tap-nvidia-chips-ft-reports-2025-11-27/))

Meta is negotiating a multi‑year deal to deploy Google’s Tensor Processing Units (TPUs) in its data centers starting in 2027 and may rent TPUs via Google Cloud as early as 2026. If finalized, the move would diversify Meta’s AI hardware beyond Nvidia and bolster Google’s push to commercialize TPUs, reshaping competitive dynamics in AI compute.

Alphabet shares climbed toward a $4 trillion valuation, driven by investor confidence in Google’s AI roadmap, including Gemini and its in‑house AI chips. The rally underscores how expectations around AI products and custom silicon are increasingly shaping megacap tech valuations.
Markets reacted to the reported Meta–Google TPU talks described above: Alphabet shares rose and Nvidia slipped. If finalized, the deal would mark a strategic win for Google’s AI hardware and intensify competition in the AI accelerator market.

The Trump administration is considering allowing Nvidia to sell its H200 AI accelerators to China, signaling a potential softening of export curbs after the recent U.S.–China truce on trade and tech. Any approval would reopen a major market for Nvidia while intensifying debate over U.S. national security and AI leadership.

Meta is seeking federal approval to trade wholesale electricity so it can underwrite long-term power purchases from new plants for its AI data centers and resell the excess on wholesale markets. Microsoft is pursuing the same authority and Apple already has it, highlighting how the AI compute boom is pushing Big Tech deeper into energy markets.