
NVIDIA H100 SXM

Hopper Architecture · 80GB HBM3 · SXM

VRAM: 80GB
FP16: 134.0 TFLOPS
TDP: 700W
Hardware Price: $32k (MSRP: $30k)
Cloud from: $2.10/hr across 13 providers (cheapest at Fluidstack)

Quick Insights

Performance/Dollar: 4.19 TFLOPS/$k (FP16 performance per $1,000 of hardware cost)
VRAM/Dollar: 2.5 GB/$k (VRAM per $1,000 of hardware cost)
vs Data Center Average: -25% perf (FP16 TFLOPS comparison)
Cloud Availability: 13 providers, from $2.10/hr
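The per-dollar insights above are simple normalizations of the spec-sheet numbers. A minimal sketch of the calculation, assuming the figures quoted on this page (134.0 FP16 TFLOPS, 80GB VRAM, ~$32k street price):

```python
def per_thousand_dollars(value: float, price_usd: float) -> float:
    """Normalize a spec value per $1,000 of hardware cost."""
    return value / (price_usd / 1000)

FP16_TFLOPS = 134.0
VRAM_GB = 80
PRICE_USD = 32_000  # approximate street price from this page

perf_per_k = per_thousand_dollars(FP16_TFLOPS, PRICE_USD)  # TFLOPS per $1k
vram_per_k = per_thousand_dollars(VRAM_GB, PRICE_USD)      # GB per $1k

print(f"{perf_per_k:.2f} TFLOPS/$k, {vram_per_k:.1f} GB/$k")
# → 4.19 TFLOPS/$k, 2.5 GB/$k
```

The same ratio works for any GPU on this site, which is what makes it useful for cross-card comparisons.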

Specifications

VRAM: 80GB HBM3
Memory Bandwidth: 3.4 TB/s
FP16 TFLOPS: 134.0
Tensor TFLOPS: 2.0k
FP32 TFLOPS: 67.0
TDP: 700W
Form Factor: SXM
Architecture: Hopper
NVLink: Yes (900GB/s)
Release Date: 2022-09

Buy vs Rent Analysis

Buy Hardware: $32k
  • One-time cost, unlimited usage
  • Full control over hardware
  • Electricity & cooling costs extra
  • Depreciation over 2-3 years
Best if using more than 15,238 hours total.

Rent Cloud GPU: from $2.10/hr
  • Pay only for what you use
  • No upfront investment
  • Scale up/down instantly
  • No maintenance required
Best for under 15,238 hours or variable usage.

Breakeven Point: 15,238 hours of usage

At $2.10/hr cloud pricing, buying the hardware pays off after 15,238 hours (~635 days or 21.2 months of 24/7 usage).

Usage          Monthly Cloud Cost   Months to Breakeven
100 hrs/month  $210.00              153 months
200 hrs/month  $420.00              77 months
500 hrs/month  $1,050.00            31 months
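The breakeven math above is just the hardware price divided by the cheapest hourly rate, then spread over monthly usage. A sketch, assuming the $32k price and $2.10/hr rate quoted on this page:

```python
import math

HARDWARE_PRICE = 32_000  # approximate street price
CLOUD_RATE = 2.10        # cheapest on-demand $/hr on this page

def breakeven_hours(price: float, rate: float) -> int:
    """Total cloud hours at which rental spend matches the purchase price."""
    return int(price / rate)

def months_to_breakeven(price: float, rate: float, hours_per_month: int) -> int:
    """Whole months of usage needed to reach the breakeven point."""
    return math.ceil(breakeven_hours(price, rate) / hours_per_month)

print(breakeven_hours(HARDWARE_PRICE, CLOUD_RATE))  # → 15238
for hpm in (100, 200, 500):
    print(f"{hpm} hrs/month: {months_to_breakeven(HARDWARE_PRICE, CLOUD_RATE, hpm)} months")
# → 153, 77, and 31 months respectively
```

Note this ignores electricity, cooling, and resale value on the buy side, all of which shift the real breakeven point.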

Cloud GPU Pricing

Rent NVIDIA H100 SXM from 13 cloud providers. Prices shown per GPU per hour.

Provider Type Instance GPUs On-Demand Per GPU Spot Availability
Fluidstack gpu-cloud fluidstack-h100-sxm 1x $2.10/hr $2.10/hr (cheapest) - -
Genesis Cloud gpu-cloud genesis-h100-sxm 1x $2.19/hr $2.19/hr - -
Paperspace gpu-cloud paperspace-h100-sxm 1x $2.24/hr $2.24/hr - -
TensorDock marketplace tensordock-h100-sxm 1x $2.25/hr $2.25/hr - -
Datacrunch (Verda) gpu-cloud datacrunch-h100-sxm 1x $2.29/hr $2.29/hr - -
RunPod gpu-cloud NVIDIA H100 80GB HBM3 1x $2.69/hr $2.69/hr $1.75/hr (-35%) -
Jarvis Labs gpu-cloud jarvislabs-h100-sxm 1x $2.99/hr $2.99/hr - -
Lambda Labs gpu-cloud lambda-h100-sxm 1x $2.99/hr $2.99/hr - -
CoreWeave gpu-cloud coreweave-h100-sxm 1x $6.16/hr $6.16/hr - -
Google Cloud Platform hyperscaler gcp-h100 1x $9.80/hr $9.80/hr $2.25/hr (-77%) -
Google Cloud Platform hyperscaler gcp-h100_mega 1x $10.34/hr $10.34/hr $2.38/hr (-77%) -
Oracle Cloud hyperscaler oracle-gpu-h100 1x $10.75/hr $10.75/hr - -
Microsoft Azure hyperscaler Standard_ND96isr_H100_v5 8x $98.32/hr $12.29/hr - -
Best Spot Discount: Google Cloud Platform offers the steepest spot discount at 77% off on-demand ($2.25/hr); the lowest absolute spot price is RunPod at $1.75/hr.
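Note that the steepest percentage discount and the lowest absolute price can come from different providers. A sketch of how to pick each from the table above, using a hypothetical subset of the rows (on-demand and spot per-GPU prices; `None` means no spot offer):

```python
# (provider, on-demand $/hr, spot $/hr or None) — subset of the table above
offers = [
    ("Fluidstack", 2.10, None),
    ("RunPod", 2.69, 1.75),
    ("CoreWeave", 6.16, None),
    ("Google Cloud Platform", 9.80, 2.25),
]

# Cheapest on-demand per-GPU price
cheapest = min(offers, key=lambda o: o[1])

# Among providers with spot pricing: lowest price vs deepest discount
spot = [o for o in offers if o[2] is not None]
best_spot_price = min(spot, key=lambda o: o[2])
deepest_discount = max(spot, key=lambda o: 1 - o[2] / o[1])

print(cheapest[0])          # → Fluidstack
print(best_spot_price[0])   # → RunPod
print(deepest_discount[0])  # → Google Cloud Platform
```

RunPod wins on absolute spot price ($1.75/hr), while GCP wins on discount depth (77% off a much higher on-demand rate).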

H100 SXM vs Alternatives

Compare NVIDIA H100 SXM with similar GPUs from other brands.

GPU                          VRAM          FP16 TFLOPS     Bandwidth   Hardware Price   Cloud Price
H100 SXM (this GPU)          80GB          134.0           3.4 TB/s    $32k             -
AMD Instinct MI210           64GB (-20%)   181.0 (+35%)    1.6 TB/s    -                -
AMD Instinct MI300           128GB (+60%)  490.3 (+266%)   5.3 TB/s    $15k (-53%)      -
AMD Instinct MI300A          128GB (+60%)  980.0 (+631%)   5.3 TB/s    -                -
AMD Instinct MI250X          128GB (+60%)  383.0 (+186%)   3.3 TB/s    -                -
AMD Instinct MI250           128GB (+60%)  362.0 (+170%)   3.3 TB/s    -                -
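The signed percentages in the comparison table are differences relative to the H100 SXM baseline. A minimal sketch of that calculation, using figures from the table:

```python
def delta_pct(value: float, baseline: float) -> str:
    """Signed percentage difference vs a baseline, formatted as in the table."""
    pct = round((value - baseline) / baseline * 100)
    return f"{pct:+d}%"

H100_FP16, H100_VRAM = 134.0, 80  # H100 SXM baseline from the spec table

print(delta_pct(181.0, H100_FP16))  # MI210 FP16 → +35%
print(delta_pct(64, H100_VRAM))     # MI210 VRAM → -20%
print(delta_pct(490.3, H100_FP16))  # MI300 FP16 → +266%
```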

Frequently Asked Questions about H100 SXM

How much does the NVIDIA H100 SXM cost?

The NVIDIA H100 SXM has a market price of approximately $32k (MSRP: $30k). Cloud rental starts at $2.10/hr. Prices may vary based on retailer, region, and availability.

Is the NVIDIA H100 SXM good for AI/ML workloads?

Yes, the NVIDIA H100 SXM with 80GB VRAM is suitable for many AI/ML workloads, and it is particularly well suited to LLM training. For the largest language models, you may need multiple GPUs or a higher-VRAM option such as the H200.

Should I buy or rent the H100 SXM?

The breakeven point is approximately 15,238 hours of usage. Buy if you'll use it more than this; rent for shorter projects or variable workloads. Cloud rental from Fluidstack starts at $2.10/hr.

What can the NVIDIA H100 SXM run?

With 80GB VRAM and 134.0 FP16 TFLOPS, the NVIDIA H100 SXM can run large language models (7B-13B), Stable Diffusion XL, video AI, and professional 3D rendering.

How does the H100 SXM compare to alternatives?

The NVIDIA H100 SXM offers 80GB VRAM and 134.0 FP16 TFLOPS at $32k. Compare it with similar GPUs using the comparison table above. Key factors: VRAM for model size, TFLOPS for speed, and price for budget.