
NVIDIA H100 PCIe

Hopper Architecture · 80GB HBM2e · PCIe

VRAM: 80GB
FP16: 102.0 TFLOPS
TDP: 350W
Hardware Price: $28k (MSRP: $25k)
Cloud from: $2.39/hr · 3 providers · cheapest at RunPod

Quick Insights

Performance/Dollar: 3.64 TFLOPS/$k (FP16 performance per $1,000 of hardware price)
VRAM/Dollar: 2.9 GB/$k (VRAM per $1,000 of hardware price)
vs Data Center Average: -43% perf (FP16 TFLOPS comparison)
Cloud Availability: 3 providers, from $2.39/hr
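
These per-dollar figures are straightforward arithmetic on the spec sheet. A minimal sketch of the derivation (variable names are illustrative; prices vary by retailer and region):

```python
# Per-dollar value metrics for the H100 PCIe, using this page's figures.
fp16_tflops = 102.0          # FP16 TFLOPS (non-tensor)
vram_gb = 80                 # VRAM in GB
hardware_price_usd = 28_000  # approximate market price

price_in_k = hardware_price_usd / 1_000
print(f"{fp16_tflops / price_in_k:.2f} TFLOPS/$k")  # 3.64 TFLOPS/$k
print(f"{vram_gb / price_in_k:.1f} GB/$k")          # 2.9 GB/$k
```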

Specifications

VRAM: 80GB HBM2e
Memory Bandwidth: 2.0 TB/s
FP16 TFLOPS: 102.0
Tensor TFLOPS: 1.5k
FP32 TFLOPS: 51.0
TDP: 350W
Form Factor: PCIe
Architecture: Hopper
NVLink: No
Release Date: 2022-09

Buy vs Rent Analysis

Buy Hardware: $28k
  • One-time cost, unlimited usage
  • Full control over hardware
  • Electricity & cooling costs extra
  • Depreciation over 2-3 years
Best if you'll use it for more than 11,715 hours total.

Rent Cloud GPU: $2.39/hr
  • Pay only for what you use
  • No upfront investment
  • Scale up/down instantly
  • No maintenance required
Best for under 11,715 hours or for variable usage.

Breakeven Point: 11,715 hours of usage

At $2.39/hr cloud pricing, buying the hardware pays off after 11,715 hours (~488 days or 16.3 months of 24/7 usage).

Usage         | Monthly Cloud Cost | Months to Breakeven
100 hrs/month | $239.00            | 118 months
200 hrs/month | $478.00            | 59 months
500 hrs/month | $1,195.00          | 24 months
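
The breakeven math above is simple division; a short sketch reproducing the numbers (assumes hardware cost only, ignoring electricity, cooling, and resale value):

```python
import math

# Breakeven: hours of cloud rental that equal the hardware price.
hardware_price_usd = 28_000
cloud_rate_per_hr = 2.39

breakeven_hours = hardware_price_usd / cloud_rate_per_hr
print(f"{breakeven_hours:,.0f} hours")           # 11,715 hours
print(f"~{breakeven_hours / 24:.0f} days 24/7")  # ~488 days

# Months to breakeven at the usage levels in the table above.
for hrs_per_month in (100, 200, 500):
    monthly_cost = hrs_per_month * cloud_rate_per_hr
    months = math.ceil(breakeven_hours / hrs_per_month)
    print(f"{hrs_per_month} hrs/month: ${monthly_cost:,.2f}/month, {months} months")
```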

Cloud GPU Pricing

Rent NVIDIA H100 PCIe from 3 cloud providers. Prices shown per GPU per hour.

Provider    | Type      | Instance            | GPUs | On-Demand | Per GPU  | Spot            | Availability
RunPod      | gpu-cloud | NVIDIA H100 PCIe    | 1x   | $2.39/hr  | $2.39/hr | $1.25/hr (-48%) | -
Lambda Labs | gpu-cloud | lambda-h100-pcie    | 1x   | $2.49/hr  | $2.49/hr | -               | -
CoreWeave   | gpu-cloud | coreweave-h100-pcie | 1x   | $4.25/hr  | $4.25/hr | -               | -
Best Spot Deal: RunPod offers spot pricing at $1.25/hr (48% off on-demand).
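
A small sketch of how the cheapest on-demand rate and the spot discount fall out of the table above (rates copied from this page; spot capacity is preemptible, so treat it as a best-case price):

```python
# Hourly rates per provider, copied from the pricing table above.
rates = {
    "RunPod":      {"on_demand": 2.39, "spot": 1.25},
    "Lambda Labs": {"on_demand": 2.49, "spot": None},
    "CoreWeave":   {"on_demand": 4.25, "spot": None},
}

cheapest = min(rates, key=lambda p: rates[p]["on_demand"])
print(f"Cheapest on-demand: {cheapest} at ${rates[cheapest]['on_demand']:.2f}/hr")

runpod = rates["RunPod"]
discount = 1 - runpod["spot"] / runpod["on_demand"]
print(f"RunPod spot discount: {discount:.0%}")  # 48%
```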

H100 PCIe vs Alternatives

Compare NVIDIA H100 PCIe with similar GPUs from other brands.

GPU                 | VRAM         | FP16 TFLOPS   | Bandwidth | Hardware Price | Cloud Price
H100 PCIe (this GPU)| 80GB         | 102.0         | 2.0 TB/s  | $28k           | -
AMD Instinct MI210  | 64GB (-20%)  | 181.0 (+77%)  | 1.6 TB/s  | -              | -
MI300 (AMD)         | 128GB (+60%) | 490.3 (+381%) | 5.3 TB/s  | $15k (-46%)    | -
AMD Instinct MI300A | 128GB (+60%) | 980.0 (+861%) | 5.3 TB/s  | -              | -
AMD Instinct MI250X | 128GB (+60%) | 383.0 (+275%) | 3.3 TB/s  | -              | -
AMD Instinct MI250  | 128GB (+60%) | 362.0 (+255%) | 3.3 TB/s  | -              | -
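
The percentage annotations are relative to the H100 PCIe baseline; a minimal sketch of the calculation (rounding assumed to match the table):

```python
# How the (+/-%) annotations in the table above are derived:
# every alternative is compared against the H100 PCIe baseline.
H100_VRAM_GB = 80
H100_FP16_TFLOPS = 102.0

def delta(value: float, baseline: float) -> str:
    """Signed percent difference vs the H100 PCIe, rounded like the table."""
    return f"{(value / baseline - 1) * 100:+.0f}%"

print(delta(64, H100_VRAM_GB))         # -20%  (MI210 VRAM)
print(delta(181.0, H100_FP16_TFLOPS))  # +77%  (MI210 FP16)
print(delta(490.3, H100_FP16_TFLOPS))  # +381% (MI300 FP16)
```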

Best Use Cases

No specific use case recommendations for NVIDIA H100 PCIe yet.


Frequently Asked Questions about H100 PCIe

How much does the NVIDIA H100 PCIe cost?

The NVIDIA H100 PCIe has a market price of approximately $28k (MSRP: $25k). Cloud rental starts at $2.39/hr. Prices may vary by retailer, region, and availability.

Is the NVIDIA H100 PCIe good for AI/ML workloads?

Yes, the NVIDIA H100 PCIe with 80GB VRAM is suitable for many AI/ML workloads. For large language models that don't fit in 80GB, you may need multiple GPUs or a higher-VRAM option.

Should I buy or rent the NVIDIA H100 PCIe?

The breakeven point is approximately 11,715 hours of usage. Buy if you'll use it for more than that; rent for shorter projects or variable workloads. Cloud rental from RunPod starts at $2.39/hr.

What can the NVIDIA H100 PCIe run?

With 80GB VRAM and 102.0 FP16 TFLOPS, the NVIDIA H100 PCIe can run large language models (7B-13B comfortably in FP16, and larger models with quantization), Stable Diffusion XL, video AI models, and professional 3D rendering.
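
As a rough fit check, model weights take about bytes-per-parameter × parameter count, plus working memory for the KV cache and activations. A hedged sketch; the 1.2x overhead factor is an assumption, not a measured figure:

```python
# Rough check: do a model's weights fit in 80GB of VRAM?
# The 1.2x overhead factor (KV cache, activations) is an assumption.
def fits_in_vram(params_b: float, bytes_per_param: float,
                 vram_gb: float = 80.0, overhead: float = 1.2) -> bool:
    weights_gb = params_b * bytes_per_param  # 1B params at 1 byte ~= 1 GB
    return weights_gb * overhead <= vram_gb

print(fits_in_vram(13, 2.0))  # True:  13B in FP16 (~26 GB weights)
print(fits_in_vram(70, 2.0))  # False: 70B in FP16 (~140 GB weights)
print(fits_in_vram(70, 0.5))  # True:  70B at 4-bit (~35 GB weights)
```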

How does the NVIDIA H100 PCIe compare to alternatives?

The NVIDIA H100 PCIe offers 80GB VRAM and 102.0 FP16 TFLOPS at $28k. Compare it with similar GPUs using the comparison table above. Key factors: VRAM for model size, TFLOPS for speed, and price for budget.