Description
Wave NF5688M5/M6 GPU Servers
The NF5688M6 is a new-generation NVLink AI server developed by Wave for hyperscale data centers, offering high performance, broad compatibility, and strong expandability. It is the first server to combine two of Intel's latest Ice Lake CPUs with eight of NVIDIA's latest 500W A100 GPUs, fully interconnected via NVSwitch, in a 6U chassis, and the industry's first air-cooled product to support 500W A100 GPUs. It also provides up to 12 PCIe expansion slots and supports Wave's self-developed double-width N20X, NV DPU, and other smart NICs. Combined with AIStation, Wave's AI resource scheduling platform, it can fully unleash AI computing performance of up to 5 petaFLOPS.
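The 5 petaFLOPS figure can be sanity-checked with simple arithmetic, assuming NVIDIA's published A100 peak of 624 TFLOPS for FP16 Tensor Core throughput with structured sparsity (an assumption not stated in the text above):

```python
# Back-of-envelope check of the ~5 petaFLOPS claim.
# Assumes NVIDIA's published A100 peak of 624 TFLOPS
# (FP16 Tensor Core throughput with structured sparsity).
TFLOPS_PER_A100 = 624
NUM_GPUS = 8

total_petaflops = TFLOPS_PER_A100 * NUM_GPUS / 1000
print(f"{total_petaflops:.3f} petaFLOPS")  # 4.992 petaFLOPS, i.e. ~5
```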
Features
Advanced technology
2 Intel Ice Lake processors built on a 10nm process
8 NVIDIA A100 GPUs, fully interconnected via NVSwitch at 600GB/s per GPU
Supports Multi-Instance GPU (MIG), dramatically increasing GPU resource utilization
Up to 10 × 200G HDR InfiniBand ports for high-speed interconnect expansion
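The MIG support listed above is configured through NVIDIA's standard `nvidia-smi` tooling; a minimal sketch of partitioning one A100 follows. The GPU index and profile ID are illustrative assumptions, and the available profiles depend on the driver version and GPU memory size:

```shell
# Enable MIG mode on GPU 0 (requires root; a GPU reset may be needed).
sudo nvidia-smi -i 0 -mig 1

# List the GPU-instance profiles this driver/GPU combination offers.
nvidia-smi mig -lgip

# Create two GPU instances with profile 9 (3g.20gb on a 40GB A100)
# and a compute instance on each (-C). Profile ID 9 is an assumption
# for this hardware; check the -lgip output first.
sudo nvidia-smi mig -cgi 9,9 -C

# Verify the resulting instances.
nvidia-smi mig -lgi
```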
Stable quality
Supports hardware and software RAID solutions to protect data
N+N redundant power supplies ensure reliable system operation
Optimized cooling design supports stable operation at high ambient temperatures
Intelligent remote management for fast fault localization
Optimized design
The industry's only air-cooled server supporting 500W A100 GPUs
High-performance resource ratio of GPU : compute IB : storage IB = 8:8:2
Modular design for flexible operation and easy O&M
Support for self-developed N20X, NV DPU, and other custom smart NICs
Mature ecosystem
Extensive, mature x86 + CUDA global development ecosystem
Support for leading deep learning frameworks such as TensorFlow, PyTorch, and PaddlePaddle
Efficiently supports large-scale CV/NLP/NMT/DLRM model training and inference
Easily connects with Metabrain ecosystem partners to provide rich industry AI solutions
Tech Specs
Model | NF5688M6 |
Height | 6U |
GPU computing module | 1 × HGX A100 8-GPU |
CPU | 2 × 3rd Gen Intel® Xeon® Scalable processors (Ice Lake), up to 270W TDP, 3 UPI links |
Chipset | Intel® C621A series chipset (Lewisburg-R) |
Memory | 32 × DDR4 RDIMM/LRDIMM slots, up to 3200 MT/s |
Storage | 8 × 2.5" NVMe SSDs or 16 × 2.5" SATA/SAS SSDs |
M.2 | 2 × NVMe/SATA M.2 on board |
PCIe expansion | 10 × PCIe 4.0 x16 slots plus 2 × PCIe 4.0 x16 slots (x8 signal), or 6 × PCIe 4.0 x16 slots |
RAID support | Optional RAID 0/1/10/5/50/6/60; cache supercapacitor protection, RAID state migration, and RAID configuration memory |
Network | Optional 1 × OCP 3.0 NIC (PCIe 4.0 x16), supporting 10G/25G/100G |
Front I/O | 1 × USB 3.0, 1 × USB 2.0, 1 × VGA, 1 × RJ45 management port |
Rear I/O | 1 × USB 3.0, 1 × VGA |
Remote management | Built-in BMC remote management module supporting Redfish, IPMI, SOL, KVM, etc. |
Operating system | Red Hat Enterprise Linux 7.8 64-bit, CentOS 7.8, Ubuntu 18.04 or later |
Cooling | N+1 redundant hot-swappable fans |
Power supply | 6 × 3000W 80 PLUS Platinum PSUs with 3+3 redundancy |
Chassis dimensions | 447mm (W) × 263.9mm (H) × 850mm (D) |
Operating temperature | 5℃ ~ 35℃ (41℉ ~ 95℉) |
Fully configured weight | ≤ 88kg |
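The spec table lists Redfish among the BMC's management interfaces; a minimal sketch of building an authenticated Redfish request with only the Python standard library follows. The BMC address and credentials are placeholder assumptions, and `/redfish/v1/Systems` is the standard DMTF Redfish systems collection:

```python
# Sketch of an authenticated GET against a BMC's Redfish API.
# Host and credentials below are illustrative placeholders.
import base64
import urllib.request

def redfish_get(bmc_host: str, path: str, username: str, password: str):
    """Build a Redfish GET request using HTTP Basic authentication."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{bmc_host}{path}",
        headers={
            "Authorization": f"Basic {token}",
            "Accept": "application/json",
        },
    )

req = redfish_get("192.0.2.10", "/redfish/v1/Systems", "admin", "password")
print(req.full_url)  # https://192.0.2.10/redfish/v1/Systems
# urllib.request.urlopen(req) would perform the call against a live BMC.
```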