SCI-UC Advanced Computing Resources

For UC users who do not have their own computing resources but need temporary access to advanced computing infrastructure for their experiments, SCI-UC provides the following shared resource environment:


Intel Xeon Westmere EP group

Working nodes

ce{100-115}

System Architecture

  • Compute nodes: 16 working nodes
  • Enclosures (chassis): 4 × SUPERMICRO SuperServer 6026TT-HTRF (4 nodes per enclosure)

Node Configuration

Component        Specification                  Technical Notes
Processor        2 × Intel Xeon® E5645          Westmere EP
Physical cores   12 cores per node              Hyper-Threading (×2)
RAM              54 GB DDR3 ECC
Local storage    100 GB SATA                    Scratch space (no SSD)
Network          Dual Gigabit Ethernet          Management & data interconnect
Motherboard      Supermicro X8DTT-HF+
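
Once logged in, the hardware reported by Slurm can be checked against this table. A minimal sketch, assuming the Slurm hostlist notation ce[100-115] matches the node names listed above:

```bash
# Node-oriented listing of the Westmere nodes (state, CPUs, memory)
sinfo -N -l -n ce[100-115]

# Full configuration recorded by Slurm for a single node
scontrol show node ce100
```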

System Capacity

  • Total nodes: 16
  • Total physical cores: 192 (16 nodes × 12 cores)
  • Total logical cores: 384 (with Hyper-Threading)

Usage Model

This system is a shared resource managed by the SLURM workload manager and is primarily intended for:

  • Serial workloads
  • Low-scale parallel jobs

A complete software stack is available to support parallel execution, including MPI and OpenMP.
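
As an illustration of this usage model, a minimal Slurm batch script for a small MPI job could look like the sketch below. The partition name, the MPI module name and the executable are placeholders rather than actual SCI-UC values; the SCI-UC Slurm manual referenced at the end of this page documents the real ones.

```bash
#!/bin/bash
#SBATCH --job-name=mpi_test
#SBATCH --nodes=2                  # two Westmere nodes
#SBATCH --ntasks-per-node=12       # one MPI rank per physical core
#SBATCH --time=01:00:00
#SBATCH --output=mpi_test-%j.out
#SBATCH --partition=westmere       # placeholder: the real partition name is site-specific

module load openmpi                # placeholder: the actual MPI module name may differ

srun ./my_mpi_app                  # placeholder executable
```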

The system is best suited for:

  • Serial applications
  • Embarrassingly parallel workloads
  • Communication-light MPI applications (low to moderate communication requirements)
  • Hybrid MPI/OpenMP workloads that emphasize intra-node parallelism (see the sketch after this list)
  • Code development, testing, and debugging
  • Small- to medium-scale benchmarking and scalability studies
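
As a sketch of the hybrid intra-node case, the script below runs one MPI rank per node and spreads 12 OpenMP threads over the physical cores of each node. Again, the partition, module and executable names are placeholders, not SCI-UC-specific values.

```bash
#!/bin/bash
#SBATCH --job-name=hybrid_test
#SBATCH --nodes=2                  # one MPI rank per node
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=12         # one OpenMP thread per physical core
#SBATCH --time=01:00:00
#SBATCH --partition=westmere       # placeholder partition name

module load openmpi                # placeholder module name

# Match the OpenMP thread count to the cores allocated by Slurm
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

srun ./my_hybrid_app               # placeholder executable
```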

Limitations

Due to the hardware and network characteristics of this system (legacy Westmere architecture, Gigabit Ethernet interconnect), the following workloads are not recommended:

  • Strongly coupled MPI applications with high communication demands
  • Latency-sensitive workloads
  • Large-scale distributed jobs requiring high-speed interconnects
  • Memory-intensive applications requiring more than the 54 GB of memory available per node
  • I/O-intensive workloads

Usage

For usage instructions, refer to the SCI-UC Slurm manual.

For additional information or support, please contact soporte.sci@unican.es.