
Spack

Last updated 25 days ago


Spack is an open-source package manager that lets developers easily manage multiple versions of the same software and its dependencies, for example by quickly switching between multiple CUDA or gcc versions.

Installation

Let's get started with Spack on DataCrunch On-demand cluster!

On your first boot, Spack is not added to your shell by default. To initialize Spack, please run:

. /home/spack/spack/share/spack/setup-env.sh

The line above can also be added to your .bashrc so that the Spack commands are available on every login.
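For example, you can append the setup line to your .bashrc with a guard against adding it twice (the path is the same one used above):

```shell
# Add the Spack setup line to ~/.bashrc so every login shell initializes Spack.
# The grep guard avoids appending the same line more than once.
SETUP='. /home/spack/spack/share/spack/setup-env.sh'
grep -qxF "$SETUP" ~/.bashrc 2>/dev/null || echo "$SETUP" >> ~/.bashrc
tail -n 1 ~/.bashrc
```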

Basic usage

We recommend you consult the Spack documentation to learn more about its features. Below, we provide some basic examples.

You can activate a specific version of a package by running spack load package@version and, conversely, deactivate it by running spack unload package. Behind the scenes, Spack handles this by prepending the package's binary directory to the $PATH environment variable.
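The mechanism can be sketched as follows; this is an illustration of what spack load effectively does, not an actual Spack invocation, and the directory name is a stand-in:

```shell
# Sketch of the `spack load` mechanism: prepend a package's bin directory to
# $PATH so its executables shadow any system-wide versions of the same tools.
# `spack unload` removes the entry again.
demo_bin=/tmp/spack-demo/bin          # stand-in for a package's bin directory
mkdir -p "$demo_bin"
PATH="$demo_bin:$PATH"                # what `spack load` effectively does
echo "$PATH" | cut -d: -f1            # the loaded package's bin dir is searched first
```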

Example commands

List the currently installed packages:

spack find

On a freshly installed cluster this lists the packages preinstalled on the image.

To show information on all installable versions of NVIDIA CUDA:

spack info cuda

To install CUDA version 12.6.2:

spack install cuda@12.6.2

Once the package has been installed, load it with:

spack load cuda@12.6.2

Verify that the package has been loaded:

spack find --loaded | grep cuda

Now the NVIDIA CUDA compiler (nvcc) path will point to the Spack installation corresponding to the loaded CUDA version:

which nvcc

This should output something like:

/home/spack/spack/opt/.../cuda-12.6.2-fguwwqog63caubyxg2q4mgcce5n5rmrv/bin/nvcc

Troubleshooting

The Spack setup script is available at /usr/local/bin/spack.setup.sh. If for any reason you delete your /home/spack directory, you can recreate it by running this script.

We recommend you read spack.setup.sh before using it. For example, it sets cuda_arch=90a, which is not the right choice for NVIDIA GPU generations newer than the H200.
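One way to change this default is a Spack packages.yaml override. The fragment below is a hedged sketch: the file path and the cuda_arch value are illustrative for a hypothetical newer GPU generation, and the values your Spack version accepts may differ, so check the package's supported variants (for example via spack info) before relying on it:

```yaml
# ~/.spack/packages.yaml — hypothetical override of the default CUDA
# architecture variant for all packages; replace 100 with the value
# matching your GPU generation.
packages:
  all:
    variants: [cuda_arch=100]
```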
