Request access to the cluster
To access the cluster and use its computing and storage resources, you must first request an account by sending an email to:
soporte.sci@unican.es
Please include the following information in your request:
- Full name:
- Email address:
- Department / Research group:
- SSH public key: (see below)
SSH Key Requirement
Access to the cluster is only allowed via SSH key-based authentication. Password logins are disabled.
If you already have an SSH key pair, simply attach your public key file to your request email.
If you need to generate a new SSH key pair, run the following command on your local computer:
ssh-keygen -t ed25519
You can press Enter to accept the default file location (~/.ssh/id_ed25519). Once generated, attach the public key file (id_ed25519.pub) to your access request.
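To attach the public key, you can print it and copy its contents (assuming you kept the default location):

[user@localmachine]$ cat ~/.ssh/id_ed25519.pub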
Once you have access to the cluster, you can authorize any additional SSH public keys on the User Interface node by following these instructions.
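As a general sketch of how this works on most Linux systems (the linked instructions take precedence), an additional public key is appended to the authorized_keys file of your account on the User Interface node; the key string and comment below are placeholders:

[user@ui ~]$ echo "ssh-ed25519 AAAA...your-key... user@laptop" >> ~/.ssh/authorized_keys
[user@ui ~]$ chmod 600 ~/.ssh/authorized_keys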
Access and Use of the Cluster
Once your request is approved, you’ll receive your assigned username and access confirmation.
You can log in to the cluster’s user interface node (front-end) using SSH:
[user@localmachine]$ ssh user@ui.sci.unican.es
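Optionally, you can add a host alias to the ~/.ssh/config file on your local machine so the hostname, username, and key are picked up automatically (the alias "sci" and the username are placeholders):

Host sci
    HostName ui.sci.unican.es
    User user
    IdentityFile ~/.ssh/id_ed25519

You can then connect simply with:

[user@localmachine]$ ssh sci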
Access from Windows
If you are using Windows, you can connect using an SSH client such as PuTTY. You can find instructions for accessing the cluster with PuTTY in this documentation page.
Once you have successfully logged in
Once connected to the front-end node, you can:
- Submit jobs to the compute nodes using the cluster’s scheduler (Slurm).
- Access shared storage areas (e.g., $HOME, /lustre).
- Transfer files using scp or rsync (see the examples below).
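For example, files can be copied between your local machine and the cluster with scp or rsync (the paths are placeholders; replace $GROUP and $USER with your group and username):

[user@localmachine]$ scp results.tar.gz user@ui.sci.unican.es:/lustre/$GROUP/WORK/$USER/
[user@localmachine]$ rsync -avP ./mydata/ user@ui.sci.unican.es:/lustre/$GROUP/WORK/$USER/mydata/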
Cluster File Systems
The cluster provides several file systems optimized for different purposes. Each user has access to a home directory, a personal working area, and a shared data lake.
| PATH | File System | Size | Quota | Purpose |
|---|---|---|---|---|
| $HOME | NFS | 120G | | Permanent home directory (configuration files, scripts, source code) |
| /lustre/$GROUP/WORK/$USER | LUSTRE | 246T | 10 TB / 1M files | Personal working storage |
| /lustre/$GROUP/DATA | LUSTRE | 313T | 10 TB / 1M files | Common and supervised data lake |
User HOME Directories
The home directory is located on an NFS file system and is accessible from all nodes of the cluster.
/nfs/home/$GROUP/$USER
It is designed for:
- Storing configuration files (.bashrc, .ssh/, etc.)
- Small scripts and source code
- Job submission files
Avoid storing large data files or running I/O-intensive operations from your home directory. Use your Personal Working Storage directory instead.
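A quick way to check how much space your home directory currently uses:

[user@ui ~]$ du -sh $HOME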
Personal Working Storage
Each user has a WORK directory on the Lustre file system:
/lustre/$GROUP/WORK/$USER
This space is:
- Accessible from all cluster nodes
- Intended for temporary and working data
- Optimized for high-performance parallel I/O
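For example, you might create a per-project directory under your WORK area and run your jobs from there (the project name is just a placeholder):

[user@ui ~]$ mkdir -p /lustre/$GROUP/WORK/$USER/myproject
[user@ui ~]$ cd /lustre/$GROUP/WORK/$USER/myproject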
Each user has a storage quota and a file count limit. You can check your current usage with:
[user@ui ~]$ lfs quota -hu $USER /lustre/$GROUP/WORK
Disk quotas for usr usuario (uid 15999):
     Filesystem    used   quota   limit   grace   files   quota   limit   grace
/lustre/geocean/WORK
                 1.197T     10T     15T       -   27149 1024000 1228800       -
- used: Current usage
- quota: Soft limit (you will be warned when exceeded)
- limit: Hard limit (cannot be exceeded)
- files: Number of inodes used
Shared Data Lake
Each research group also has a shared directory:
/lustre/$GROUP/DATA
This area is intended for:
- Long-term datasets shared among group members
- Common input/output data used in multiple projects
- Data supervised by the group’s PI or data manager
Access is group-wide, and usage is typically monitored by the administrators.
The DATA space is not intended for scratch computations or temporary job files — use your Personal Working Storage area for that.
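Group usage of the DATA area can be checked with a group quota query (a sketch, assuming group quotas are enabled; replace $GROUP with your group name):

[user@ui ~]$ lfs quota -hg $GROUP /lustre/$GROUP/DATA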
UI Node Resource Usage Limits
To ensure fair usage and system stability, the following resource limits apply to all users on the login (UI) node.
You can inspect your current limits using the ulimit command:
ulimit -a
Enforced Limits
| Resource | Soft Limit | Hard Limit | Description |
|---|---|---|---|
| Memory (virtual) | 2 GB | 4 GB | Total virtual memory per process |
| Processes (nproc) | 512 | 1024 | Maximum number of processes or threads |
| CPU time | 120 minutes | 240 minutes | Max CPU time per process (accumulated) |
- Soft limits can be increased by the user, up to the hard limit (see the example below).
- Hard limits are the maximum values enforced by the system.
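As a sketch, the CPU-time soft limit of the current shell session could be raised up to the hard limit (ulimit -t counts seconds, so 240 minutes is 14400 seconds):

[user@ui ~]$ ulimit -S -t 14400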
For heavy computations or long-running tasks, use the Slurm batch system instead of the login node.
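A minimal batch script might look like the sketch below (the job name, resources, paths, and program are placeholders; check the Slurm documentation for the options available on this cluster):

#!/bin/bash
#SBATCH --job-name=myjob          # Job name shown in the queue
#SBATCH --output=myjob_%j.log     # Standard output/error log (%j expands to the job ID)
#SBATCH --ntasks=1                # Number of tasks
#SBATCH --time=01:00:00           # Wall-clock time limit (hh:mm:ss)

# Run from the personal working storage area, not from $HOME
cd /lustre/$GROUP/WORK/$USER/myproject
./my_program

Submit it from the front-end node with:

[user@ui ~]$ sbatch myjob.sh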
Cron Jobs on the UI Node
Users are allowed to schedule background jobs using the crond service on the login node.
Editing your personal cron jobs:
crontab -e
Viewing your scheduled jobs:
crontab -l
Note: Cron jobs are subject to the same resource limits as interactive sessions. Avoid running heavy jobs via cron on the UI node.
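For example, a lightweight crontab entry that records your queued jobs once an hour might look like this (the log path is a placeholder; replace $USER with your username if it is not set in the cron environment):

0 * * * * squeue -u $USER >> $HOME/job_status.log 2>&1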
Tips
- Use srun, sbatch, or salloc for compute jobs.
- Use cron only for lightweight automation (e.g., syncing files, sending notifications, or monitoring job status).