Usage of a HPC cluster

Here we collect some general (and by no means complete) information about usage and policies of an HPC cluster.

==Structure of a HPC cluster==

The structure of a HPC system is sketched in the picture above. These are the main logical building blocks:

* a ''login node'' is exposed to users for access (typically via <code>ssh</code>),
* a dedicated ''scheduler'' (the ''queuing system'') dispatches computational jobs to the ''compute nodes'',
* computation therefore happens asynchronously (in batch mode), and not on the login node (see the job script sketch after this list),
* a specific ''software environment'' is provided on the login node and on the compute nodes to run parallel jobs.
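
To make the ''batch mode'' idea concrete, here is a minimal sketch of a job script, assuming the scheduler is SLURM (a common choice, but your cluster may use PBS, LSF, or another queuing system) and that software is provided through environment modules; the job name, partition, module names, and executable below are placeholders:

 #!/bin/bash
 #SBATCH --job-name=my_job          # placeholder job name
 #SBATCH --nodes=1                  # number of compute nodes requested
 #SBATCH --ntasks-per-node=4        # parallel tasks per node (example value)
 #SBATCH --time=01:00:00            # wall-time limit (hh:mm:ss)
 #SBATCH --partition=debug          # hypothetical partition (queue) name
  
 # load the software environment provided on the compute nodes
 # (module names are placeholders; check what your cluster offers)
 module load gcc openmpi
  
 # launch the parallel executable; ./my_program.x is a placeholder
 srun ./my_program.x

The script is submitted from the login node, and its status inspected, with:

 sbatch job.sh       # hand the job over to the queuing system
 squeue -u <user>    # list your pending and running jobs

The computation then runs on the compute nodes whenever the scheduler finds free resources, independently of your login session.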

==Connecting==

Unless other means are provided, you typically connect using the <code>ssh</code> protocol.

From a shell terminal or a suitable app:

 ssh -Y <user>@<machine_host_name>
or, equivalently,
 ssh -Y -l <user> <machine_host_name>
 <user>:               Unix username on the cluster login node
 <machine_host_name>:  address of the cluster login node
The <code>-Y</code> option enables trusted X11 forwarding, so that graphical programs started on the cluster can display on your local screen.
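
If you connect often, a purely illustrative convenience: an entry in the <code>~/.ssh/config</code> file on your own machine (the host alias <code>mycluster</code> is hypothetical) avoids retyping the full command:

 # ~/.ssh/config on your local machine; "mycluster" is a placeholder alias
 Host mycluster
     HostName <machine_host_name>
     User <user>
     ForwardX11 yes           # X11 forwarding, as with -X
     ForwardX11Trusted yes    # trusted forwarding, as with -Y

after which <code>ssh mycluster</code> is enough to reach the login node.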