E. Karavakis
iDDS: Intelligent Distributed Dispatch and Scheduling for Workflow Orchestration
The intelligent Distributed Dispatch and Scheduling (iDDS) service is a versatile workflow orchestration system designed for large-scale, distributed scientific computing. iDDS extends traditional workload and data management by integratin…
Preparation of the Multi-Site Data Processing at the Vera C. Rubin Observatory
The Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST) Camera is scheduled to start taking data in the summer of 2025. The Data Release Production will run the LSST Science Pipelines software at data facilities in the US, France…
Modernizing ATLAS PanDA for a sustainable multi-experiment future
In early 2024, ATLAS undertook an architectural review to evaluate the functionalities of its current components within the workflow and workload management ecosystem. Pivotal to the review was the assessment of the Production and Distribu…
PanDA: Production and Distributed Analysis System
Distributed Machine Learning Workflow with PanDA and iDDS in LHC ATLAS
Machine Learning (ML) has become an important tool for High Energy Physics analysis. As dataset sizes at the Large Hadron Collider (LHC) increase, and search spaces at the same time grow ever larger in order…
Overview of the distributed image processing infrastructure to produce the Legacy Survey of Space and Time
The Vera C. Rubin Observatory is preparing to execute the most ambitious astronomical survey ever attempted, the Legacy Survey of Space and Time (LSST). Currently the final phase of construction is under way in the Chilean Andes, with the …
Integrating the PanDA Workload Management System with the Vera C. Rubin Observatory
The Vera C. Rubin Observatory will produce an unprecedented astronomical data set for studies of the deep and dynamic universe. Its Legacy Survey of Space and Time (LSST) will image the entire southern sky every three to four days and prod…
Utilizing Distributed Heterogeneous Computing with PanDA in ATLAS
In recent years, advanced and complex analysis workflows have gained increasing importance in the ATLAS experiment at CERN, one of the large scientific experiments at LHC. Support for such workflows has allowed users to exploit remote comp…
panda-k8s
Repository for "panda-k8s", developed by SLAC National Accelerator Laboratory.
FTS3: Data Movement Service in containers deployed in OKD [Slides]
The File Transfer Service (FTS3) is a data movement service developed at CERN which is used to distribute the majority of the Large Hadron Collider's data across the Worldwide LHC Computing Grid (WLCG) infrastructure. At Fermilab, we have …
FTS3: Data Movement Service in containers deployed in OKD
The File Transfer Service (FTS3) is a data movement service developed at CERN which is used to distribute the majority of the Large Hadron Collider's data across the Worldwide LHC Computing Grid (WLCG) infrastructure. At Fermilab, we have …
LHC Data Storage: Preparing for the Challenges of Run-3
The CERN IT Storage Group ensures the symbiotic development and operations of storage and data transfer services for all CERN physics data, in particular the data generated by the four LHC experiments (ALICE, ATLAS, CMS and LHCb). In order…
The ATLAS Data Carousel Project Status
The High Luminosity upgrade to the LHC, which aims for a tenfold increase in the luminosity of proton-proton collisions at an energy of 14 TeV, is expected to start operation in 2028/29 and will deliver an unprecedented volume of scientifi…
NOTED: a framework to optimise network traffic via the analysis of data from File Transfer Services
Network traffic optimisation is difficult because the load is by nature dynamic and seemingly unpredictable. However, increased usage of file transfer services may help detect future loads and predict their expected dura…
FTS improvements for LHC Run-3 and beyond
The File Transfer Service (FTS), developed at CERN and in production since 2014, has become a fundamental component for the LHC experiments and is tightly integrated with experiment frameworks. Starting from the beginning of 2018 with the p…
MONIT: Monitoring the CERN Data Centres and the WLCG Infrastructure
The new unified monitoring architecture (MONIT) for the CERN Data Centres and for the WLCG Infrastructure is based on established open source technologies to collect, stream, store and access monitoring data. The previous solutions, based …
CERNBox: the CERN cloud storage hub
CERNBox is the CERN cloud storage hub. It allows synchronizing and sharing files on all major desktop and mobile platforms (Linux, Windows, macOS, Android, iOS), aiming to provide universal access and offline availability to any data store…
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres…
Unified Monitoring Architecture for IT and Grid Services
This paper provides a detailed overview of the Unified Monitoring Architecture (UMA) that aims at merging the monitoring of the CERN IT data centres and the WLCG monitoring using common and widely-adopted open source technologies such as F…
Kibana, Grafana and Zeppelin on Monitoring data
Project specification: the goal of this project is to investigate different solutions and develop some typical monitoring displays, such as the general IT Department overviews and service-specific dashboards of data provided by the IT group…
AsyncStageOut: Distributed user data management for CMS Analysis
AsyncStageOut (ASO) is a new component of the distributed data analysis system of CMS, CRAB, designed for managing users' data. It addresses a major weakness of the previous model, namely that mass storage of output data was part of the jo…
WLCG Monitoring Consolidation and further evolution
The WLCG monitoring system solves the challenging task of keeping track of the LHC computing activities on the WLCG infrastructure, ensuring the health and performance of distributed services at more than 170 sites. The challenge consists of…
AGIS: Evolution of Distributed Computing information system for ATLAS
The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS softwa…
Comprehensive Monitoring for Heterogeneous Geographically Distributed Storage
Storage capacity at CMS Tier-1 and Tier-2 sites reached over 100 Petabytes in 2014 and will be substantially increased during Run 2 data taking. The allocation of storage for individual users' analysis data, which is not accounted as a…
gLExec Integration with the ATLAS PanDA Workload Management System
ATLAS user jobs are executed on Worker Nodes (WNs) by pilots sent to sites by pilot factories. This paradigm allows high job reliability and, although it has clear advantages such as making the working environment homogeneous, t…
Processing of the WLCG job monitoring data using ElasticSearch
The Worldwide LHC Computing Grid (WLCG) includes more than 170 grid and cloud computing centres in 40 countries. More than 2 million computational jobs are being executed on a daily basis and petabyt…