A. Pérez-Calero Yzquierdo
Analysis Facilities for the HL-LHC White Paper
This white paper presents the current status of the R&D for Analysis Facilities (AFs) and attempts to summarize the views on the future direction of these facilities. These views have been collected through the High Energy Physics (HEP) So…
Operational experience from the Spanish CMS Analysis Facility at CIEMAT
The anticipated surge in data volumes generated by the LHC in the coming years, especially during the High-Luminosity LHC phase, will reshape how physicists conduct their analysis. This necessitates a shift in programming paradigms and tec…
Commissioning and exploitation of the MareNostrum5 cluster at the Barcelona Supercomputing Center for CMS computing
The MareNostrum 5 (MN5) is the newly deployed pre-exascale EuroHPC supercomputer hosted at the Barcelona Supercomputing Center (BSC) in Spain. Its 750,000-core general-purpose CPU cluster offers new opportunities for CMS data processing an…
Exploiting GPU Resources at VEGA for CMS Software Validation
In recent years, the CMS experiment has expanded the usage of HPC systems for data processing and simulation activities. These resources significantly extend the conventional pledged Grid compute capacity. Within the EuroHPC program, CMS a…
Optimization of distributed compute resources utilization in the CMS Global Pool
The CMS Submission Infrastructure is the primary system for managing computing resources for CMS workflows, including data processing, simulation, and analysis. It integrates geographically distributed resources from Grid, HPC, and cloud p…
Analysis Facilities White Paper
This white paper presents the current status of the R&D for Analysis Facilities (AFs) and attempts to summarize the views on the future direction of these facilities. These views have been collected through the High Energy Physics (HEP) So…
Adoption of a token-based authentication model for the CMS Submission Infrastructure
The CMS Submission Infrastructure (SI) is the main computing resource provisioning system for CMS workloads. A number of HTCondor pools are employed to manage this infrastructure, which aggregates geographically distributed resources from …
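The token-based model described in this abstract can be illustrated with a minimal HTCondor security configuration sketch. The knob names (SEC_DEFAULT_AUTHENTICATION_METHODS, CERTIFICATE_MAPFILE) are standard HTCondor settings, but the issuer URL and mapping rule below are illustrative assumptions, not the actual CMS SI configuration.

```
# Sketch only: enable bearer-token (SciTokens/WLCG token) authentication
# alongside filesystem auth, instead of relying on x509 proxies.
SEC_DEFAULT_AUTHENTICATION_METHODS = SCITOKENS, FS

# Map authenticated token identities to local HTCondor users.
CERTIFICATE_MAPFILE = /etc/condor/condor_mapfile

# Example mapfile rule (illustrative issuer; format is
# "SCITOKENS <issuer-regex> <local-user>"):
#   SCITOKENS /^https:\/\/cms-auth\.example\.org\/,.*/ cmspilot
```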
Repurposing of the Run 2 CMS High Level Trigger Infrastructure as a Cloud Resource for Offline Computing
The former CMS Run 2 High Level Trigger (HLT) farm is one of the largest contributors to CMS compute resources, providing about 25k job slots for offline computing. This CPU farm was initially employed as an opportunistic resource, exploit…
HPC resources for CMS offline computing: An integration and scalability challenge for the Submission Infrastructure
The computing resource needs of LHC experiments are expected to continue growing significantly during Run 3 and into the HL-LHC era. The landscape of available resources will also evolve, as High Performance Computing (HPC) and Cloud r…
A case study of content delivery networks for the CMS experiment
In 2029 the LHC will start the high-luminosity LHC program, with a boost in the integrated luminosity resulting in an unprecedented amount of experimental and simulated data samples to be transferred, processed and stored in disk and tap…
Integration of the Barcelona Supercomputing Center for CMS computing: Towards large scale production
The CMS experiment is working to integrate an increasing number of High Performance Computing (HPC) resources into its distributed computing infrastructure. The case of the Barcelona Supercomputing Center (BSC) is particularly challenging …
The integration of heterogeneous resources in the CMS Submission Infrastructure for the LHC Run 3 and beyond
While the computing landscape supporting LHC experiments is currently dominated by x86 processors at WLCG sites, this configuration will evolve in the coming years. LHC collaborations will be increasingly employing HPC and Cloud facilities…
The Spanish CMS Analysis Facility at CIEMAT
The increasingly larger data volumes that the LHC experiments will accumulate in the coming years, especially in the High-Luminosity LHC era, call for a paradigm shift in the way experimental datasets are accessed and analyzed. The current…
Extending the distributed computing infrastructure of the CMS experiment with HPC resources
Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest energy accelerator is the LHC at CERN, in Geneva, Switzerland. Each of its four major detectors, including the CMS…
HSF IRIS-HEP Second Analysis Ecosystem Workshop Report
The second workshop on the HEP Analysis Ecosystem took place 23-25 May 2022 at IJCLab in Orsay, to look at progress and continuing challenges in scaling up HEP analysis to meet the needs of HL-LHC and DUNE, as well as the very pressing nee…
Resource provisioning and workload scheduling of CMS Offline Computing
The CMS experiment requires vast amounts of computational capacity in order to generate, process and analyze the data coming from proton-proton collisions at the Large Hadron Collider, as well as Monte Carlo simulations. CMS computing need…
Reaching new peaks for the future of the CMS HTCondor Global Pool
The CMS experiment at CERN employs a distributed computing infrastructure to satisfy its data processing and simulation needs. The CMS Submission Infrastructure team manages a dynamic HTCondor pool, aggregating mainly Grid clusters worldwi…
Exploitation of network-segregated CPU resources in CMS
CMS is tackling the exploitation of CPU resources at HPC centers where compute nodes do not have network connectivity to the Internet. Pilot agents and payload jobs need to interact with external services from the compute nodes: access to …
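For worker nodes with restricted connectivity, HTCondor's standard mechanisms are the shared-port daemon and connection brokering (CCB); the sketch below shows the corresponding configuration knobs. It is a minimal illustration for a node that still has some outbound connectivity to trusted endpoints; fully offline nodes, as described in this abstract, require additional site-level bridging beyond what is shown here.

```
# Worker-node HTCondor config sketch for hosts that cannot accept
# inbound connections (illustrative, not an actual CMS site setup).

# Multiplex all daemon traffic over a single TCP port:
USE_SHARED_PORT = True

# Ask an externally reachable broker (here, the pool collector) to
# relay connections back to this node:
CCB_ADDRESS = $(COLLECTOR_HOST)
```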
Lightweight site federation for CMS support
There is a general trend in WLCG towards the federation of resources, aiming for increased simplicity, efficiency, flexibility, and availability. Although general VO-agnostic federation of resources between two independent and autonomous r…
CMS data access and usage studies at PIC Tier-1 and CIEMAT Tier-2
The current computing models from LHC experiments indicate that much larger resource increases would be required by the HL-LHC era (2026+) than those that technology evolution at a constant budget could bring. Since worldwide budget for co…
Evolution of the CMS Global Submission Infrastructure for the HL-LHC Era
Efforts in distributed computing of the CMS experiment at the LHC at CERN are now focusing on the functionality required to fulfill the projected needs for the HL-LHC era. Cloud and HPC resources are expected to be dominant relative to res…
Exploiting network restricted compute resources with HTCondor: a CMS experiment experience
In view of the increasing computing needs for the HL-LHC era, the LHC experiments are exploring new ways to access, integrate and use non-Grid compute resources. Accessing and making efficient use of Cloud and High Performance Computing (H…
CMS strategy for HPC resource exploitation
High Energy Physics (HEP) experiments will enter a new era with the start of the HL-LHC program, with computing needs surpassing by large factors the current capacities. Anticipating such a scenario, funding agencies from participating count…
Exploiting CRIC to streamline the configuration management of GlideinWMS factories for CMS support
GlideinWMS is a workload management and provisioning system that allows sharing computing resources distributed over independent sites. Based on the requests made by GlideinWMS frontends, a dynamically sized pool of resources is created by…
Improving efficiency of analysis jobs in CMS
Hundreds of physicists analyze data collected by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider using the CMS Remote Analysis Builder and the CMS global pool to exploit the resources of the Worldwide LHC Computing …
Exploring GlideinWMS and HTCondor scalability frontiers for an expanding CMS Global Pool
The CMS Submission Infrastructure Global Pool, built on GlideinWMS and HTCondor, is a worldwide distributed dynamic pool responsible for the allocation of resources for all CMS computing workloads. Matching the continuously increasing dema…
Producing Madgraph5_aMC@NLO gridpacks and using TensorFlow GPU resources in the CMS HTCondor Global Pool
The CMS experiment has an HTCondor Global Pool, composed of more than 200K CPU cores available for Monte Carlo production and the analysis of data. The submission of user jobs to this pool is handled by either CRAB, the standard workflow mana…
Improving the Scheduling Efficiency of a Global Multi-Core HTCondor Pool in CMS
Scheduling multi-core workflows in a global HTCondor pool is a multi-dimensional problem whose solution depends on the requirements of the job payloads, the characteristics of available resources, and the boundary conditions such as fair s…
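The multi-core payload requirements this abstract refers to are expressed in HTCondor through per-job resource requests. The sketch below is a hypothetical submit description; the executable name and resource values are illustrative, not CMS production settings.

```
# Hypothetical HTCondor submit file for a multi-core payload
# (illustrative values, not an actual CMS workflow description).
universe       = vanilla
executable     = run_payload.sh

# Resource requests the negotiator matches against multi-core slots:
request_cpus   = 8
request_memory = 16384   # MB

queue
```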
The LHC Tier-1 at PIC: ten years of operations
This paper summarizes ten years of operational experience of the WLCG Tier-1 computer centre at Port d’Informació Científica (PIC), which serves the ATLAS, CMS and LHCb experiments. The centre, located in Barcelona (Spain), has supported a…