S. Vallero
Computing Challenges for the Einstein Telescope project
The discovery of gravitational waves, first observed in September 2015 following the merger of a binary black hole system, has already revolutionised our understanding of the Universe. This was further enhanced in August 2017, when the coa…
interTwin D4.2 First Architecture design of the DTs capabilities for High Energy Physics, Radio astronomy and Gravitational-wave Astrophysics
This deliverable describes the capabilities that the architecture design of a Digital Twin Engine (DTE) must provide in order to support the use cases coming from the High Energy Physics, Radio Astronomy and GW Astrophysics an…
interTwin D7.2 Report on requirements and thematic modules definition for the physics domain first version
interTwin co-designs and implements the prototype of an interdisciplinary Digital Twin Engine (DTE). The developed DTE will be an open source platform that includes software components for modelling and simulation to integrate application-…
OpenForBC, the GPU partitioning framework
In recent years, the compute performance of GPUs (Graphics Processing Units) has increased dramatically, especially in comparison to that of CPUs (Central Processing Units). GPUs are nowadays the hardware of choice for scientific applications in…
Delivering a machine learning course on HPC resources
In recent years, proficiency in data science and machine learning (ML) has become one of the most requested skills for jobs in both industry and academia. Machine learning algorithms typically require large sets of data to train the models and …
Managing a heterogeneous scientific computing cluster with cloud-like tools: ideas and experience
Obtaining CPU cycles on an HPC cluster is nowadays relatively simple and sometimes even cheap for academic institutions. However, in most cases providers of HPC services would not allow changes to the configuration, implementation o…
Fair Share Scheduler for OpenNebula
A small Cloud infrastructure for scientific computing likely operates in a saturated regime, which makes it necessary to optimize the allocation of resources. Tenants typically pay a priori for a fraction of the overall resources. Within this busines…
HPC4AI
In April 2018, under the auspices of the POR-FESR 2014-2020 program of the Italian Piedmont Region, the Turin Centre on High-Performance Computing for Artificial Intelligence (HPC4AI) was funded with a capital investment of 4.5 M€ and it beg…
INDIGO-DataCloud: A data and computing platform to facilitate seamless access to e-infrastructures.
This paper describes the achievements of the H2020 project INDIGO-DataCloud. The project has provided e-infrastructures with tools, applications and cloud framework enhancements to manage the demanding requirements of scientific communitie…
Plancton: an opportunistic distributed computing project based on Docker containers
The computing power of most modern commodity computers is far from being fully exploited by standard usage patterns. In this work we describe the development and setup of a virtual computing cluster based on Docker containers used as worke…
Improved Cloud resource allocation: how INDIGO-DataCloud is overcoming the current limitations in Cloud schedulers
Paper presented at: 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP2016), 10–14 October 2016, San Francisco.
Geographically distributed Batch System as a Service: the INDIGO-DataCloud approach exploiting HTCondor
One of the challenges a scientific computing centre has to face is to keep delivering well-consolidated computational frameworks (i.e. the batch computing farm) while conforming to modern computing paradigms. The aim is to ease system adm…
A FairShare Scheduling Service for OpenNebula
In the ideal limit of infinite resources, multi-tenant applications are able to scale in/out on a Cloud driven only by their functional requirements. While a large Public Cloud may be a reasonable approximation of this condition, small sci…
A Web- and Cloud-based Service for the Clinical Use of a CAD (Computer Aided Detection) System - Automated Detection of Lung Nodules in Thoracic CTs (Computed Tomographies)
M5L, a Web-based Computer-Aided Detection (CAD) system to automatically detect lung nodules in thoracic Computed Tomographies, is based on a multi-threaded analysis by independent subsystems and the combination of their results. The validati…