Instant Clusters

You can now deploy high-performance GPU training clusters with Infiniband interconnect from your DataCrunch cloud dashboard, the same way you would deploy a single GPU instance.
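
For comparison, deploying a single GPU instance programmatically looks roughly like the sketch below, which uses the DataCrunch Python SDK; instant clusters themselves are deployed from the dashboard. The instance type, image name, and hostname are illustrative placeholders rather than guaranteed values.

```python
import os

from datacrunch import DataCrunchClient

# API credentials are generated in the cloud dashboard; values come from the environment here.
CLIENT_ID = os.environ["DATACRUNCH_CLIENT_ID"]
CLIENT_SECRET = os.environ["DATACRUNCH_CLIENT_SECRET"]

datacrunch = DataCrunchClient(CLIENT_ID, CLIENT_SECRET)

# Reuse every SSH key already registered on the account.
ssh_key_ids = [key.id for key in datacrunch.ssh_keys.get()]

# Deploy a single GPU instance; the instance type and image are placeholders.
instance = datacrunch.instances.create(
    instance_type="1H200.141S.32V",
    image="ubuntu-22.04-cuda-12.0",
    ssh_key_ids=ssh_key_ids,
    hostname="example-node",
    description="example instance created via the Python SDK",
)
```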

Available contract lengths are 1 day, 1 week, 2 weeks, and 4 weeks. By default, every contract converts to Pay As You Go once the initial contract period ends, so you can keep using the cluster for as long as you need.

Instant clusters are available with Nvidia H200 SXM5 GPUs, a 3.2 Tb/s Infiniband interconnect per node (eight 400 Gb/s links), and a 100 Gb/s Ethernet network. The uplink to the Internet is symmetric 1 Gb/s.

Our instant clusters range from 16 to 64 GPUs. Each cluster has up to eight worker nodes, with 8 GPUs per worker node, and one jump host. Each worker node has local NVMe storage and access to a configurable shared filesystem with up to 50 TB of storage.

Clusters have Slurm pre-installed for easy job management. Instant clusters are currently available in the ICE-01 location, with other locations becoming available later.
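
As an example of how a job can use the pre-installed scheduler, the sketch below maps the environment variables Slurm sets for each task onto torch.distributed, so a PyTorch training script can span all worker nodes; NCCL then carries the collectives over the Infiniband fabric. The one-task-per-GPU layout, the rendezvous port, and the MASTER_ADDR export are assumptions, not a prescribed configuration.

```python
import os

import torch
import torch.distributed as dist


def init_from_slurm() -> None:
    """Map Slurm's per-task environment onto torch.distributed.

    Assumes one task per GPU (e.g. srun --ntasks-per-node=8) and that the
    batch script exported MASTER_ADDR, typically with:
      export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
    """
    rank = int(os.environ["SLURM_PROCID"])         # global task index
    world_size = int(os.environ["SLURM_NTASKS"])   # total number of tasks
    local_rank = int(os.environ["SLURM_LOCALID"])  # task index on this node

    os.environ.setdefault("MASTER_PORT", "29500")  # arbitrary free port

    torch.cuda.set_device(local_rank)
    # NCCL runs the collectives over the inter-node Infiniband links.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)


if __name__ == "__main__":
    init_from_slurm()
    # Sanity check: an all-reduce across every GPU in the allocation.
    x = torch.ones(1, device="cuda")
    dist.all_reduce(x)
    if dist.get_rank() == 0:
        print(f"all-reduce sum across {dist.get_world_size()} GPUs: {x.item()}")
    dist.destroy_process_group()
```

A script like this would typically be launched with srun from an sbatch allocation spanning the desired number of worker nodes.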
