CryoSPARC

See FASRC Docs

Configure CryoSPARC Master

The master program is controlled with the cryosparcm command documented here. The main mechanism for customizing the behavior of cryosparcm is the config file located at cryosparc_master/config.sh. The following is a basic config.sh containing the license ID, path to the MongoDB database, master hostname, and the base TCP port:

export CRYOSPARC_LICENSE_ID="________-____-____-____-____________"
export CRYOSPARC_DB_PATH="_________________________________________/cryosparc_database"
export CRYOSPARC_DEVELOP=false
export CRYOSPARC_INSECURE=false
export CRYOSPARC_CLICK_WRAP=true
export CRYOSPARC_MASTER_HOSTNAME=holy_______.rc.fas.harvard.edu
export CRYOSPARC_BASE_PORT=____
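
With config.sh in place, the master is managed through cryosparcm subcommands; for example:

cryosparcm start     # start the master processes (database, command servers, web app)
cryosparcm status    # report the state of the master processes
cryosparcm stop      # shut the master down cleanly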

Installing CryoSPARC on Cannon

A configure.sh script is provided to get you up and running quickly; alternatively, you can work through the steps interactively if preferred.

The install script is based on the instructions found in the CryoSPARC guide. It should typically be run on a GPU node so that the correct CUDA modules are loaded and functioning (see the example after this list). In order, the script performs the following steps:

  1. Set environment variables particular to this install
  2. Remove any potential old files that will cause the install to fail
  3. Load the appropriate CUDA related modules
  4. Make the installation directories
  5. Download the CryoSPARC Master and Worker binaries
  6. Unpack the binaries
  7. Install the master binary
  8. Install the worker binary
  9. Add an initial user account
  10. Add the cluster configuration for running jobs on Cannon
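
One way to run the install script on a GPU node is with an interactive Slurm allocation. The partition name, resource request, and script location below are assumptions; adjust them for your account:

# Request an interactive session on a GPU node (partition and resources are assumptions).
salloc -p gpu --gres=gpu:1 -c 8 --mem=64G -t 04:00:00

# On the allocated node, run the provided install script.
bash configure.sh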

Running CryoSPARC on Cannon

To run CryoSPARC on Cannon, several things need to happen. First, an open port on the login node in the range 7000-11000 needs to be identified (see here). Next, a CPU node should be allocated using the Slurm job scheduler. The CryoSPARC master configuration file, $INSTALL_DIR/cryosparc_master/config.sh, needs to be modified to reflect the available port and node hostname. Then the cryosparcm master process can be started on the CPU node. To adapt the MongoDB database to a potentially new port number, the following commands need to be run once the master has started:

cryosparcm fixdbport
cryosparcm restart
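
Putting this together, a launch on an allocated node might look like the following sketch; the port number, partition, and resource request are assumptions, and editing config.sh is done as described above:

# 1. On the login node, confirm that a candidate port in 7000-11000 is free (no output means free).
PORT=7500
ss -tln | grep ":$PORT"

# 2. Allocate a CPU node with Slurm (partition and resources are assumptions).
salloc -p shared -c 8 --mem=32G -t 08:00:00

# 3. On the allocated node, set CRYOSPARC_MASTER_HOSTNAME and CRYOSPARC_BASE_PORT in
#    $INSTALL_DIR/cryosparc_master/config.sh to the node hostname and chosen port,
#    then start the master and fix the database port as shown above.
cryosparcm start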

Then the user must set up an SSH tunnel from their local machine through the login node to the compute node. Note that this can also be done using a VDI session in place of the login node; the VDI instance will typically have better performance. Just substitute the name of the VDI node for the login node name.

ssh -NL port:compute_node:port username@holylogin.rc.fas.harvard.edu

Once authenticated, the CryoSPARC webapp should be viewable at http://localhost:port.
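
For instance, with an assumed port of 7500 and a hypothetical compute node named holy7c12345, the tunnel and URL would be:

ssh -NL 7500:holy7c12345:7500 username@holylogin.rc.fas.harvard.edu
# then browse to http://localhost:7500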

Mostly Automated CryoSPARC Connection

Because the launch process is tedious and error-prone, I have automated much of it in a shell function which the user can add to their cluster .bashrc.

The function uses a temporary file ($CONNECTION_SCRIPT below) to monitor the progress of the cryosparcm launch; this script is written to the user's home directory once the server has initialized.
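
The cluster-side launchcryosparc function itself is not reproduced in this section. The following is a minimal sketch of the idea, assuming a free port of 7500, a shared CPU partition, and that the config.sh edits and cryosparcm fixdbport step are handled as described above; all names and values are assumptions.

# Hypothetical sketch of the cluster-side launcher; adapt paths, partition, and port.
launchcryosparc()
{
    PORT=7500                                             # assumed free port in 7000-11000
    CONNECTION_SCRIPT="$HOME/.cryosparc_connection_script.sh"
    rm -f "$CONNECTION_SCRIPT"

    # Submit a CPU job that starts the master and then writes out the SSH tunnel
    # command for connect_to_cryosparc to fetch and run on the local machine.
    sbatch -p shared -c 8 --mem=32G -t 08:00:00 --wrap \
        "cryosparcm start && \
         echo \"ssh -NL $PORT:\$(hostname):$PORT \$USER@holylogin.rc.fas.harvard.edu\" > $CONNECTION_SCRIPT && \
         sleep infinity"

    # Block until the master has initialized and the connection script appears.
    while [ ! -f "$CONNECTION_SCRIPT" ]; do sleep 5; done
    echo "CryoSPARC launched; run connect_to_cryosparc from your local machine."
}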

To connect from your local terminal, add the following to your local .bashrc.

connect_to_cryosparc()
{
    LOGIN=holylogin.rc.fas.harvard.edu             # login (or VDI) node hostname
    USERNAME=______________                        # your cluster username
    CONNECTION_SCRIPT=".cryosparc_connection_script.sh"

    # Fetch the connection script written on the cluster, then run it locally
    # to open the SSH tunnel to the CryoSPARC master.
    rsync $USERNAME@$LOGIN:~/$CONNECTION_SCRIPT .
    chmod +x $CONNECTION_SCRIPT
    ./$CONNECTION_SCRIPT
}

Using these two shell functions, CryoSPARC is launched in two steps:

  1. On the login node, type launchcryosparc
  2. Once the launch function returns, type connect_to_cryosparc on the local machine