UC Scientific Computing Service

SCI-UC: Advanced Computing Service for Research

Here you will find all the information and resources related to our projects and services.


Quick Start

To begin, visit the Access and Use of the Cluster guide and the Slurm User guide. If you have any questions, feel free to contact us at soporte.sci@unican.es.
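
As a first orientation (not a substitute for the guides above), a minimal Slurm batch script typically looks like the sketch below; the job name, resource limits, and program are placeholders, and the exact partitions and limits available on the cluster are described in the Slurm User guide.

    #!/bin/bash
    #SBATCH --job-name=test_job         # name shown in the queue
    #SBATCH --ntasks=1                  # number of tasks (processes)
    #SBATCH --cpus-per-task=4           # CPU cores per task
    #SBATCH --mem=8G                    # memory for the job
    #SBATCH --time=01:00:00             # wall-clock limit (hh:mm:ss)
    #SBATCH --output=test_job_%j.log    # output file (%j = job id)

    # Placeholder: replace with your actual program
    srun ./my_program

Save it as job.sh, submit it with sbatch job.sh, and monitor it with squeue -u $USER.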


SCI

The Scientific Computing Service (SCI) at the University of Cantabria is a specialized unit that provides technical support and advanced computing services to the university's research groups, organizations, and companies involved in research and innovation. Our main mission is to facilitate access to and efficient use of high-performance computing (HPC) infrastructures to accelerate scientific and technological advances.

What We Do

We offer a wide range of services, including:

  • HPC system installation and configuration: We implement and optimize high-performance computing environments tailored to the specific needs of researchers.
  • Maintenance and technical support: We handle monitoring, maintenance, and upgrades of the computing infrastructure to ensure optimal performance and minimize downtime.
  • Specialized consulting: We work closely with research groups to provide tailored solutions, advise on the efficient use of resources, and ensure they can fully leverage HPC capabilities.
  • Equipment procurement consulting: We offer expert guidance on hardware and software purchases to ensure research groups have the appropriate technology for their projects.

Infrastructure

Data Center rooms

80 m² divided into two work areas:

  • Assembly area (equipment assembly and repair)
  • Operation area ("cold" room): CUBO AP (Schneider Electric)
    • 23 IT racks
    • 4 cooling racks (InRow units) and a free cooling system
      • Up to 120 kW of cooling power (70 kW from free cooling)
    • 3 power racks (UPS + PDUs)
      • Up to 320 kW of electrical power

Hardware stack

HPC Servers:

  • > 90 compute nodes
    • > 4,600 cores and 15 TB of RAM in total
  • 2 GPU nodes (see the request sketch after this list)
    • 2 NVIDIA Quadro RTX 4000 GPUs
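
As a hedged illustration of how the GPU nodes are typically reached through Slurm, the line below requests a single GPU interactively; the GRES name and the resource limits are assumptions to be checked against the Slurm User guide.

    # Request one GPU interactively and list it (GRES name is an assumption)
    srun --gres=gpu:1 --ntasks=1 --time=00:30:00 --pty nvidia-smi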

Networking:

  • 1/10 Gbps Ethernet network
  • 200 Gbps InfiniBand HPC network for computation and data storage

Storage:

  • Lustre FS: +1.2 PB (high-performance remote storage)
  • NFS: +80 TB (home directories, remote storage)
  • Local scratch (SSD/SATA): +100 TB (see the usage sketch after this list)
  • NAS servers: 2 units with 46 TB + 64 TB (backups)
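
A common pattern on a storage layout like this is to run I/O-heavy work on the node-local scratch and keep inputs and final results on the shared filesystems (home/NFS or Lustre). The sketch below illustrates that pattern; the scratch and results paths, and the program and its option, are hypothetical and must be replaced with the paths documented for the cluster.

    #!/bin/bash
    #SBATCH --job-name=io_example
    #SBATCH --ntasks=1
    #SBATCH --time=02:00:00

    # Hypothetical paths: adjust to the cluster's real mount points
    SCRATCH_DIR=/scratch/$USER/$SLURM_JOB_ID   # fast node-local scratch
    RESULTS_DIR=$HOME/results                  # shared home (NFS) area

    mkdir -p "$SCRATCH_DIR" "$RESULTS_DIR"
    cd "$SCRATCH_DIR"

    # Work against local scratch, then copy results back to shared storage
    srun "$HOME"/my_program --output output/   # placeholder program
    cp -r output "$RESULTS_DIR"/

    # Free the scratch space when finished
    rm -rf "$SCRATCH_DIR"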

Base software stack

  • All compute nodes run Rocky Linux 8.10

HPC software stack

Objectives:

  • Hide hardware complexity from the end user
  • Improve the usability of the overall system
  • Simplify management

Our main HPC management platforms (a brief usage sketch follows the list):

  • Slurm
  • Spack
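
From a user's point of view, these two platforms are usually combined as in the brief sketch below: Spack exposes the installed scientific software, and Slurm schedules the work. The package name is a placeholder, not a statement of what is actually installed on the cluster.

    # List the software installed through Spack
    spack find

    # Load a package into the current environment (gcc is only an example)
    spack load gcc

    # Send the work to the cluster through Slurm
    sbatch job.sh        # queue a batch script
    squeue -u $USER      # check the state of your jobs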

System Management Properties:

  • Fully custom-developed, enabling in-house learning and optimization
  • Exclusively based on open-source software
  • Server virtualization
  • Continuous process of evolution and improvement
  • 99% of the software is free

Our main infrastructure management platforms:

  • XCP-ng & Xen Orchestra (XOA)
  • Docker
  • Rocky Linux 8.10 & 9.6
  • Ganglia monitoring & Nagios Core

Involved Research Groups


Contact: soporte.sci@unican.es