Access and Use of the Cluster
Access to the cluster is provided through a user interface machine (front-end), which is reached via SSH:
[user@localmachine]$ ssh user@ui.sci.unican.es
Alternatively, you can use the PuTTY client on Windows. You will be prompted for your password to access the service.
Access with Encrypted Key (recommended)
If you want to log in without entering a password each time, you can configure key-based authentication by generating an SSH key pair:
ssh-keygen -t rsa -b 2048 -f $HOME/.ssh/geocean -N ""
Next, copy the key to the cluster:
ssh-copy-id -i $HOME/.ssh/geocean.pub user@ui.sci.unican.es
In some cases (e.g., macOS users and older machines), it is also necessary to add the key to the SSH agent:
ssh-add ~/.ssh/geocean
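You can also add an entry to your SSH client configuration so that the key is picked up automatically. A minimal sketch, assuming the key name used above (replace user with your actual username):

# Add to $HOME/.ssh/config on your local machine
Host ui.sci.unican.es
    User user
    IdentityFile ~/.ssh/geocean
    IdentitiesOnly yes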
File Systems
Path | File System | Size | Quota per User | Purpose |
---|---|---|---|---|
$HOME | NFS | 120G | - | Home directories |
/lustre/$GROUP/WORK/$USER | LUSTRE | 246T | 10T bytes / 1M files | Personal temporary and working storage |
/lustre/$GROUP/DATA | LUSTRE | 313T | 10T bytes / 1M files | Common and supervised data lake |
User HOME Directories
The user's HOME directory is shared and accessible from all nodes:
/home/grupos/$GROUP/$USER
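To see how much space remains on the shared home file system, a generic check (not specific to this cluster's quota tooling) is:

[user@ui ~]$ df -h /home/grupos/$GROUP/$USER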
Temporary and Working Storage Space
The user's WORK directory is accessible from all nodes of the cluster. Its purpose is to provide a personal space for temporary and working storage.
Each user is assigned a storage quota and a limit on the number of files, which can be checked using the LUSTRE `lfs quota` command:
[user@ui ~]$ lfs quota -hu $USER /lustre/$GROUP/WORK
Disk quotas for usr user (uid 15999):
Filesystem used quota limit grace files quota limit grace
/lustre/geocean/WORK
1.197T 10T 15T - 27149 1024000 1228800 -
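The shared DATA space is quota-limited as well. Assuming its quota is accounted per group (an assumption, not stated above), it can be inspected with the group flag:

[user@ui ~]$ lfs quota -hg $GROUP /lustre/$GROUP/DATA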
UI Node Resource Usage Limits
To ensure fair usage and system stability, the following resource limits apply to all users on the login (UI) node.
You can inspect your current limits using the `ulimit` command:
ulimit -a
Enforced Limits
Resource | Soft Limit | Hard Limit | Description |
---|---|---|---|
Memory (virtual) | 2 GB | 4 GB | Total virtual memory per process |
Processes (nproc) | 512 | 1024 | Maximum number of processes or threads |
CPU time | 120 minutes | 240 minutes | Max CPU time per process (accumulated) |
- Soft limits can be increased by the user, up to the hard limit (see the example below).
- Hard limits are the maximum values enforced by the system.
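For example, a soft limit can be raised up to the hard limit for the current shell session; note that `ulimit -v` takes kilobytes and `ulimit -t` takes seconds (values below match the hard limits in the table):

ulimit -S -v 4194304   # virtual memory: 4 GB
ulimit -S -u 1024      # processes/threads
ulimit -S -t 14400     # CPU time: 240 minutes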
For heavy computations or long-running tasks, use the Slurm batch system instead of the login node.
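As a minimal sketch of a batch job (resource values, file names, and the executable are placeholders; adapt them to your case):

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=02:00:00
#SBATCH --output=%x-%j.out

srun ./my_program

Submit it with `sbatch job.sh` and check its state with `squeue -u $USER`.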
Cron Jobs on the UI Node
Users are allowed to schedule background jobs using the `crond` service on the login node.
Editing your personal cron jobs:
crontab -e
Viewing your scheduled jobs:
crontab -l
Note: Cron jobs are subject to the same resource limits as interactive sessions. Avoid running heavy jobs via cron on the UI node.
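As an illustration of an appropriately lightweight cron entry (the path is illustrative; cron runs with a minimal environment, so prefer absolute paths):

# Every 30 minutes, write the status of your Slurm jobs to a file
*/30 * * * * squeue -u $LOGNAME > $HOME/job-status.txt 2>&1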
Tips
- Use `srun`, `sbatch`, or `salloc` for compute jobs (see the interactive example below).
- Use cron only for lightweight automation (e.g., syncing files, sending notifications, or monitoring job status).
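For interactive testing on a compute node, a short allocation could look like this (resource values are placeholders):

[user@ui ~]$ srun --ntasks=1 --cpus-per-task=4 --time=01:00:00 --pty bash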