AMD Unveils Instinct MI350P: PCIe Version Delivers Open-Source AI Compute to Existing Servers
Santa Clara, CA – February 2025 – AMD today announced the Instinct MI350P, a PCIe add-in card that brings high-performance AI and compute acceleration for open-source workloads to existing air-cooled PCIe 5.0 servers. The move marks a significant shift for the MI350 series, which until now shipped only in the Open Accelerator Module (OAM) form factor.
“The MI350P is a direct response to customer demand for flexible AI acceleration without requiring new infrastructure,” said Dr. Lisa Su, CEO of AMD. “Data centers can now slot this card into their current servers and instantly tap into AMD’s latest compute innovations.”
Industry analyst Patrick Moorhead of Moor Insights & Strategy called the announcement “a game-changer for enterprise AI adoption. It removes the friction of OAM-based integration and opens the door for faster, lower-cost deployment of open-source models.”
Background
The AMD Instinct MI350 series originally launched in the OAM form factor, optimized for dense, liquid-cooled clusters. Many data centers, however, operate air-cooled racks built around standard PCIe 5.0 slots rather than OAM baseboards.
Enter the MI350P: a standard PCIe add-in card that slots into any compatible air-cooled server. It delivers the same compute cores as the MI350 OAM variant but in a more accessible form factor. This aligns with AMD’s commitment to open-source AI ecosystems, including ROCm support.
The company also confirmed the Instinct MI400 series remains on track for later this year, targeting next-generation AI workloads.
What This Means
For IT managers, the MI350P means upgrading compute capabilities without a forklift replacement of existing servers. Air-cooled PCIe 5.0 racks, common in mid-to-large enterprises, can now run cutting-edge AI inference and training locally.
The card supports 8-way GPU configurations for multi-instance environments and is fully compatible with AMD’s ROCm open-source software stack, allowing developers to run PyTorch, TensorFlow, and other AI frameworks unchanged.
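For developers, that compatibility is concrete: on a ROCm build of PyTorch, AMD accelerators surface through the familiar torch.cuda interface, so CUDA-targeted scripts typically run without modification. The following is a minimal sketch assuming a ROCm-enabled PyTorch install; the device enumeration and small matmul are illustrative, not MI350P-specific.

```python
import torch

# On a ROCm build of PyTorch, AMD GPUs appear through the standard
# torch.cuda API, so existing CUDA-targeted code runs unchanged.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")

    # Run a small matrix multiply on the first accelerator,
    # exactly as one would on any GPU backend.
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x.T
    print(y.shape, y.device)
else:
    print("No ROCm/CUDA device visible to this PyTorch build.")
```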
“We see this as a bridge to the future,” added Forrest Norrod, SVP and GM of AMD’s Data Center Solutions Group. “The MI350P lets customers start their AI journey on their terms, then seamlessly migrate to MI400 OAM clusters when ready.”
Pricing and availability details are expected in Q3 2025. Early adopters include cloud service providers and academic research labs, according to AMD.
Key benefits at a glance:
- Drop-in compatibility with PCIe 5.0 air-cooled servers
- Full open-source support via ROCm
- Up to 2x performance per watt over previous generation (AMD internal data)
- Supports FP8, FP16, BF16, and INT8 precision for AI inference and training (see the sketch below)
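As a rough illustration of how those precisions surface in practice, here is a minimal sketch of BF16 mixed-precision inference using PyTorch’s autocast. The two-layer model is a placeholder, the same pattern applies to FP16, and FP8/INT8 paths generally go through separate quantization toolchains rather than autocast.

```python
import torch
import torch.nn as nn

# Toy model standing in for a real inference workload.
model = nn.Sequential(nn.Linear(4096, 4096), nn.GELU(), nn.Linear(4096, 1024))
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

x = torch.randn(8, 4096, device=device)

# Autocast runs matmuls in the reduced precision the card supports
# (BF16 here; FP16 works the same way) while keeping numerically
# sensitive operations in FP32.
with torch.inference_mode(), torch.autocast(device_type=device, dtype=torch.bfloat16):
    out = model(x)

print(out.dtype, out.shape)
```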
AMD also emphasized the MI350P’s role in open-source AI, contrasting with proprietary accelerator ecosystems. “We believe in giving developers choice—and the MI350P embodies that,” Su said.