Microsoft announced its first AI chip, Maia 100

16:00, 30.08.2024

Microsoft first spoke about developing its own AI accelerator chip, code-named Maia, at the Ignite 2023 conference, and has now published detailed specifications for Maia 100. Maia 100 is one of the largest processors manufactured on TSMC's 5 nm node and is built specifically for demanding AI workloads in Azure.


Maia 100 has the following key specifications (a quick back-of-envelope estimate based on these figures follows the list):

  • die size - 820 mm²;
  • packaging - TSMC N5 process with CoWoS-S interposer technology;
  • HBM bandwidth/capacity - 1.8 TB/s @ 64 GB HBM2E;
  • peak dense tensor compute - 3 POPS (6-bit), 1.5 POPS (9-bit), 0.8 POPS (BF16);
  • L1/L2 SRAM - 500 MB;
  • backend network bandwidth - 600 GB/s (12×400 GbE);
  • host bandwidth (PCIe) - 32 GB/s, PCIe Gen5 ×8;
  • design TDP - 700 W;
  • provisioned TDP - 500 W.
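
A quick back-of-envelope calculation from the figures above puts the numbers in perspective: with 0.8 POPS of dense BF16 compute and 1.8 TB/s of HBM bandwidth, a kernel needs roughly 444 FLOPs per byte of HBM traffic before it becomes compute-bound. The sketch below only restates the listed peak figures in a simple roofline model; real workloads will land well below these peaks.

    # Back-of-envelope roofline estimate from the published Maia 100 figures.
    # Peak numbers only; real kernels will achieve lower throughput.
    PEAK_BF16_FLOPS = 0.8e15   # 0.8 POPS dense BF16
    HBM_BANDWIDTH = 1.8e12     # 1.8 TB/s HBM2E

    # Ridge point: arithmetic intensity (FLOPs per byte moved from HBM)
    # at which a kernel stops being memory-bound.
    ridge_point = PEAK_BF16_FLOPS / HBM_BANDWIDTH
    print(f"compute-bound above ~{ridge_point:.0f} FLOPs/byte")  # ~444

    def attainable_flops(arithmetic_intensity: float) -> float:
        """Classic roofline: min(peak compute, bandwidth * intensity)."""
        return min(PEAK_BF16_FLOPS, HBM_BANDWIDTH * arithmetic_intensity)

    # Example: a kernel at 100 FLOPs/byte is still memory-bound on these figures.
    print(f"{attainable_flops(100) / 1e12:.0f} TFLOPS attainable at 100 FLOPs/byte")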


Microsoft Maia 100 is vertically integrated to optimize cost and performance: custom server boards are paired with specially designed racks and a software stack built to raise performance further.


The Maia 100 SoC has the following architecture:

  • A high-speed tensor unit for training and inference, built as a 16xRx16 structure and supporting a wide range of data types.
  • A vector processor - a loosely coupled superscalar engine with a custom instruction set architecture (ISA) supporting data types including FP32 and BF16.
  • Direct Memory Access (DMA) engines supporting different tensor sharding schemes.
  • Hardware semaphores that enable asynchronous programming (see the sketch after this list).
  • Software-managed L1 and L2 storage for better data utilization and energy efficiency.
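
The combination of software-managed SRAM, DMA engines, and hardware semaphores points to an explicitly scheduled, double-buffered execution model. The sketch below is only a conceptual illustration in plain Python - threading semaphores stand in for the hardware semaphores, and none of the names correspond to a real Maia API.

    # Conceptual double-buffering illustration: a "DMA engine" fills one SRAM
    # buffer while the "tensor unit" computes on the other, with semaphores
    # coordinating the two. This is NOT Maia's actual programming interface.
    import threading

    NUM_TILES = 8
    buffers = [None, None]                                     # two SRAM slots
    empty = [threading.Semaphore(1), threading.Semaphore(1)]   # slot free for DMA
    full = [threading.Semaphore(0), threading.Semaphore(0)]    # slot ready to use

    def dma_engine():
        """Stand-in for the DMA engine copying tiles from HBM into SRAM."""
        for tile in range(NUM_TILES):
            slot = tile % 2
            empty[slot].acquire()           # wait until compute drained this slot
            buffers[slot] = f"tile-{tile}"  # pretend async copy HBM -> SRAM
            full[slot].release()            # signal: data ready

    def tensor_unit():
        """Stand-in for the tensor/vector units consuming tiles from SRAM."""
        for tile in range(NUM_TILES):
            slot = tile % 2
            full[slot].acquire()            # wait for the DMA to finish this slot
            print("computing on", buffers[slot])
            empty[slot].release()           # signal: slot can be refilled

    producer = threading.Thread(target=dma_engine)
    consumer = threading.Thread(target=tensor_unit)
    producer.start(); consumer.start()
    producer.join(); consumer.join()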


Maia 100 utilizes an Ethernet-based interconnect with a custom RoCE-like protocol for ultra-high-bandwidth computing, supporting all-gather and scatter-reduce bandwidth of up to 4800 Gbps and all-to-all bandwidth of up to 1200 Gbps.
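
To put those interconnect figures in perspective, a rough estimate of collective transfer times can be read straight off the quoted peak bandwidths (latency, protocol overhead, and congestion are ignored here, so these are best-case numbers):

    # Rough best-case transfer times from the quoted collective bandwidths.
    ALL_GATHER_BPS = 4800e9 / 8   # 4800 Gbps all-gather / scatter-reduce -> bytes/s
    ALL_TO_ALL_BPS = 1200e9 / 8   # 1200 Gbps all-to-all -> bytes/s

    def transfer_time_ms(num_bytes: float, bytes_per_second: float) -> float:
        """Idealized wire time for moving num_bytes at the given rate."""
        return num_bytes / bytes_per_second * 1e3

    one_gib = 1 << 30  # example payload: 1 GiB of activations or gradients
    print(f"all-gather of 1 GiB: ~{transfer_time_ms(one_gib, ALL_GATHER_BPS):.2f} ms")
    print(f"all-to-all of 1 GiB: ~{transfer_time_ms(one_gib, ALL_TO_ALL_BPS):.2f} ms")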


The Maia SDK enables quick porting of PyTorch and Triton models to Maia and includes tools for straightforward deployment to Azure OpenAI Services. Developers can write kernels either in the Triton programming language for DNNs or with the Maia API for optimized performance; PyTorch models are also supported natively.
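
For a sense of what the Triton path looks like, the kernel below is a minimal, vendor-neutral Triton example (an element-wise add). It uses only the standard Triton API; how the Maia SDK compiles and dispatches such kernels to the accelerator is not shown here and would be handled by Microsoft's toolchain.

    # Minimal, generic Triton kernel (element-wise add). Standard Triton API only;
    # Maia-specific compilation and dispatch are left to the SDK and not shown.
    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)                    # block handled by this program
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements                    # guard the tensor's tail
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n = x.numel()
        grid = (triton.cdiv(n, 1024),)                 # one program per 1024 elements
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out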
