Scale Up with Technology Designed For HPC and AI on OCP Hardware
More room for high-value technology, lower total cost of ownership, more innovation
Don’t Compromise, Meet Technical and Business Priorities
Leading-edge organizations choose Open Compute Project (OCP)-based infrastructure so they can scale out cost-effectively. There is a strong argument for using OCP-based hardware in a data center: It is less expensive to buy and to maintain, reduces points of failure, is designed for more efficient power management, and significantly reduces security issues.
However, for teams trying to perform complex high-performance computing (HPC) or artificial intelligence (AI), OCP has been a challenge. Few vendors have the skills and experience to build server or rack designs that meet the complex, software-driven needs of HPC and AI. Among those that do, designs often do not address the power and cooling requirements of today’s fastest processors.
Fortunately, Penguin Computing™—an early adopter and supporter of OCP technology—has the solution: the Penguin Computing Tundra platform. The Tundra platform combines the capital and operating expense savings of OCP-based hardware with the latest processors for HPC and AI.
Tundra AP
Tundra AP, the latest generation of Penguin Computing’s highly dense Tundra supercomputing platform, combines the processing power of Intel® Xeon® Scalable 9200 series processors with Penguin’s Relion XO1122eAP server in an OCP form factor that delivers a high density of CPU cores per rack.
Organizations of all sizes and compute needs can benefit from the performance of Intel® Xeon® Scalable 9200 series processors in Penguin’s Tundra platform, which delivers industry-leading density, improved power efficiency, improved serviceability, and room-neutral cooling via an integrated direct-to-chip liquid cooling solution.
The Tundra AP platform:
- Leverages the power of Intel® Xeon® Scalable 9200 series processors in an OCP form factor
- Delivers higher node density
- Enables greater power capacity
- Improves serviceability
- Provides room-neutral, integrated, direct-to-chip cooling
- Lowers total cost of ownership
Tundra ES for HPC
Thanks to two decades of experience in how software is orchestrated, deployed, managed, and optimized for different compute architectures, Penguin Computing was able to create a complete HPC system that is dense enough for the most challenging projects and flexible enough for virtually any HPC computing architecture—while also taking advantage of OCP’s inherent ease of maintenance and low total cost of ownership (TCO).
Now in its second generation, the Tundra platform:
- Supports an exceptionally diverse array of technologies, including graphics processing unit (GPU)-accelerated computing on the latest NVIDIA® graphics accelerators
- Includes server formats from 1OU to 4OU with a capacity for more than 100 nodes per rack
- Comes with the latest AMD EPYC™ processors or Intel® Xeon® Scalable processors, high-speed software-defined networking (SDN), and localized storage for flexibility and performance
If you’re interested in a hybrid HPC approach or need to enable remote access, Tundra technology is also available via the cloud through the Penguin Computing® On-Demand™ (POD™) platform.
Learn More
- Tundra ES for HPC Datasheet
- OCP-based Servers for Tundra
- Open Bridge Rack Datasheet
- OCP Vendor Selection Guide
- Customer Story – OCP-based HPC at Lawrence Livermore National Lab
Tundra ES for AI
To meet the increasing demand for AI training and inference, the Penguin Computing AI Practice has created a reference design for the Tundra ES platform that supports the latest developments in AI while taking advantage of OCP’s low TCO and allowing massive scale-up.
This production-quality design was informed by real-world experience with some of the largest AI clusters in the world, including how AI frameworks are orchestrated, deployed, and optimized for different compute architectures. As a result, the design is optimized to support the technologies required for inference workloads, is dense enough to fit significantly more high-value technology per rack than traditional solutions, and is suitable for a more diverse array of compute architectures than most OCP designs.
The Tundra platform:
- Supports an exceptionally diverse array of technologies, including the NVIDIA T4 with Turing Tensor Cores for inference
- Includes server formats from 1OU to 4OU with a capacity for more than 100 nodes per rack
- Comes with the latest AMD EPYC™ processors or Intel® Xeon® Scalable processors, high-speed software-defined networking (SDN), and localized storage for flexibility and performance
Learn More
Learn how the principles of the Open Compute Project® (OCP) overcome current design limitations for computing hardware and software, the benefits of OCP over EIA standards, and how Penguin Solutions™ delivers HPC and AI solutions that align with OCP principles.