Amin Vahdat, Google’s chief technologist for AI infrastructure, described rapid progress in the company’s Gemini model family ...
The Maia 200 deployment demonstrates that custom silicon has matured from experimental capability to production ...
The hyperscaler leverages a two-tier Ethernet-based topology, a custom AI Transport Layer, and software tools to deliver a tightly integrated, low-latency platform ...
Some space industry leaders — including Elon Musk and Jeff Bezos — believe the answer could be in orbit, via space-based data centers. That idea has gained attention in recent weeks. Musk is ...
New research from Dell’Oro Group shows how AI is impacting the entire data center infrastructure stack, from record ...
Microsoft launches Maia 200, promising faster AI without cutting ties to longtime chip suppliers or external hardware sources ...
Rohit Mittal, a veteran of Google Cloud's custom TPUs and Intel's silicon photonics, joins Nvidia's NVLink Fusion team ...
Microsoft is announcing a successor to its first in-house AI chip today, the Maia 200. Built on TSMC’s 3nm process, Microsoft says its Maia 200 AI accelerator “delivers 3 times the FP4 performance of ...
The two tech giants remain the most balanced plays in the booming AI market.
... the Maia 200 packs 140+ billion transistors, 216 GB of HBM3E, and a massive 272 MB of on-chip SRAM to tackle the efficiency ...
As retailers continue to struggle with fragmented CRM, CDP and marketing systems, Socialhub.AI positions its AI-native CIP as ...
Calling it the highest-performance chip of any custom cloud accelerator, the company says Maia is optimized for AI inference across multiple models.