Scientific problems driving cloud and edge computing use a number of techniques such as filtering, combinatorial optimization, agent-based modelling, massive data analysis, embarrassingly parallel simulations and machine learning. The sheer size, complexity and volume of the data collected, combined with the need for faster processing, push the demands on the underlying infrastructure to extremes. It is natural in such circumstances to turn to HPC in order to import knowledge and practices, and possibly to adopt it where it makes sense.
However, there are a number of barriers to targeting an HPC system: for example, building effective containers with appropriate support for co-processors and efficient use of low-latency network technologies is not a task most users are willing to embark on. Similarly, application software available via repositories is rarely mainstream in a managed HPC environment. On the other hand, modelling and evaluation techniques (both analytical and simulation-based) from such scientific applications may bring
great advances to the design and implementation of complex HW/SW architectures: for example, data structures and densities are very different from those of traditional HPC workloads, and this may have an impact on the processing units.
The aim of this session is to explore how HPC and Cloud technologies interact today to address these challenges and how they are likely to evolve. The session will comprise presentations from leading scientists and engineers working in Earth Observation and Modelling, Seismology, High Energy Physics, Astrophysics and large-scale Industrial Monitoring. We will focus on a number of topics including: applications of computing platforms (edge computing, fog computing, cloud computing, distributed memory, shared computing resources); novel data storage architectures; and in-situ data harvesting and analytics in preparation for Cloud and HPC processing.