PHILOSOPHICAL ASPECTS OF SIMULATIONS IN COSMOLOGY: CONFERENCE SCHEDULE & ABSTRACTS
| Session | Time (PDT) | Day 1 | Day 2 |
| --- | --- | --- | --- |
| Session 1 (phys) | 9:00 – 9:55 | Santi Roca-Fabrega | Desika Narayanan |
| *5 minute break* | | | |
| Session 2 (phil) | 10:00 – 10:55 | Marie Gueguen | Claus Beisbart |
| *5 minute break* | | | |
| Session 3 (phys) | 11:00 – 11:55 | Rainer Weinberger | Francisco Villaescusa-Navarro |
| Lunch break | 12:00 – 13:00 | | |
| Discussion | 13:00 – 14:00 | Panel Discussion | Panel Discussion |
Abstracts and Titles:
Day 1: Monday, August 9
Comparison projects, the essential (but usually forgotten) step in computational astrophysics
9:00-9:55 am PDT, August 9
Over the last two decades, researchers in most fields of astrophysics have adopted numerical simulations as a tool for their daily science. Computational models now play an important role in gaining new knowledge about astrophysical processes that can only be partially studied from available observations. In recent years, the fast development of computer technology has allowed researchers to increase the quality of simulations and thus to reduce the non-physical results arising from limited temporal, spatial, and mass resolution. In the process, many research groups around the globe created their own numerical codes to attack the same science cases. Each group followed slightly different approaches when creating its code, and some of these diverged from one another. Some codes were quickly made publicly available and, as a consequence, active communities of developers arose that continuously add new physics to reproduce observations better. At this point, some of the numerical approaches in use have diverged strongly from the others, yet, surprisingly, most of them successfully reproduce the observed properties of external galaxies. The reason for this situation is that the problem of galaxy formation and evolution, and in particular the physics that is not simulated self-consistently (sub-grid physics), is highly degenerate, so it can be solved by many different approaches. It is in the context of such highly degenerate astrophysical science cases that, from the development of the very first simulations, some groups pointed out the need for systematic comparisons among numerical codes. These comparisons should be designed to help researchers learn which results depend on the numerical implementation and which are robust. However, due to the technical and logistical complexity of these big comparison projects, only a few attempts have been made since the rise of computational astrophysics.
Nowadays, a detailed comparison among all the state-of-the-art numerical codes is an almost impossible task, both because of the large differences among codes and because of the community's limited interest in these long and scientifically unglamorous projects. But it is still needed. In this talk, I will show that some collaborative projects are still working on such comparisons. Researchers from many institutions around the globe are developing strategies that will allow an “as fair as possible” comparison among numerical codes by first calibrating their models. Although time-consuming, these calibrations have proven to be essential to reduce the number of free parameters that must be accounted for, and thus to give researchers real insight into which results are dominated by numerics and which are robust to the numerical code used.
A Tension Within Code Comparisons: Comparability or Diversity?
10:00-10:55 am PDT, August 9
The purpose of structure formation simulations in astrophysics is extremely ambitious. Their goal, indeed, is to extract predictions from a model whose physics is not well understood and the nature of whose main components is not well constrained. Not only are simulations supposed to allow us to understand what the standard cosmological model implies with respect to structure formation, but they must also constrain the nature of dark matter itself through comparison of their outcomes to observations, even though no experiment and no theoretical guidelines can be exploited as benchmarks for assessing the credibility of the simulations’ results. Hence the need to develop methodologies that allow us to determine when the result of a simulation can be trusted. Code comparisons, which compare the outcomes of simulations based on different codes, seem at first sight a natural candidate for such a role: they appear to test the dependence of a simulation’s outcome on the specifics of different codes, and its robustness to variations in their parametrization. In this paper, however, I will argue that a tension within code comparisons in astrophysics prevents them from constituting an instance of robustness analysis, for the effort to make targets comparable undermines the preservation of the diversity needed for robustness analysis. Upon analyzing the methodology of projects such as AQUILA and AGORA, I will discuss how code comparisons still have an important role to play in constraining such simulations and highlight where their contributions seem of the utmost importance.
Uncertainties in cosmological simulations of galaxy formation
11:00-11:55 am PDT, August 9
Simulations play a key role in modern theoretical astrophysics research. In particular, their ability to model complex systems opens up unprecedented opportunities to connect fundamental laws of physics to astronomical observations. Yet the increasing complexity of these simulations gives rise to a number of conceptual difficulties: accurate uncertainty estimation becomes difficult, and method verification becomes problem-dependent. Using modern cosmological simulations of galaxy formation as an example, I will discuss how these uncertainties arise and how they can be quantified. I will focus in particular on the non-linear nature of ‘feedback’ in galaxy formation simulations, argue that the study of modeling uncertainties needs to be part of the study of the system itself, and outline different ways in which this can be done in practice. Finally, I will conclude by discussing the implications for the present and future use of cosmological simulations in cosmology and galaxy formation research.
Day 2: Tuesday, August 10
Building a House of Cards: Developing a Quantitative Synergy between Theory and Observations
9:00-9:55 am PDT, August 10
Given the foundational role that cosmological simulations have played in advancing our understanding of galaxy evolution over a Hubble time, it has become increasingly important to test these simulations against observations in an apples-to-apples manner. In the last decade, a number of techniques have emerged to quantitatively connect the physical properties of galaxies that these simulations predict with bona fide observations. In this talk, I will review the techniques that underpin this “forward modeling” of galaxy evolution simulations, and discuss the physical uncertainties that dominate at nearly every stage of the process.
Why trust cosmological simulations? A closer look from the V&V perspective
10:00 – 10:55 am PDT, August 10
Computer simulations play an important role in current cosmology. In particular, they are used to trace the evolution of dark matter and visible galaxies for alternative cosmological models. Some problems for the currently popular concordance model, e.g., the core-cusp problem, crucially involve simulations.
But why should we trust such simulations? Practicing simulation scientists often answer questions of this type by referring to a battery of tests to which they subject their simulations. These tests are summarized under the label “V&V” for verification and validation. Very recently, the methods of V&V have drawn the attention of philosophers of science.
The aim of this talk is to bring the philosophical literature on V&V to bear on cosmological simulations. To this purpose, I discuss the simulations in view of the so-called Sargent cycle. I argue that the verification of cosmological simulations is essential, as is evident from the fact that potential numerical artefacts are an important issue. I further analyze the tests that are used to verify the simulations and discuss their relation to the validation of the conceptual model. A main result of my analysis is that the difficulty of securing data of cosmological import is not a severe problem for the credibility of cosmological simulations.
Can we trust predictions from super-intelligent machines?
11:00 – 11:55 am PDT, August 10
Machine learning is emerging as a very powerful tool to solve difficult problems in many different disciplines. One of its applications is to create a cosmic translator that takes as input data from cosmological surveys and returns the values of the fundamental parameters describing the laws and constituents of our Universe. Training this translator requires cosmological simulations. In this talk I will describe in detail how this task can be carried out and what kind of simulations are needed for it. I will then present the potential problems of this approach as a challenge to the community.
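The pipeline the abstract describes, training a model on simulation outputs so that it can map survey statistics back to cosmological parameters, can be sketched in miniature. Everything below is a hypothetical toy: `run_toy_simulation` is a made-up one-parameter noisy function standing in for a cosmological simulation, and the “translator” is a simple least-squares fit standing in for the neural networks trained on full simulation suites in practice.

```python
import random

random.seed(0)

def run_toy_simulation(omega):
    """Stand-in for a cosmological simulation: returns a noisy
    summary statistic that depends on the input parameter."""
    return 2.5 * omega + 0.1 + random.gauss(0.0, 0.01)

# Build a training set from many toy simulations.
params = [random.uniform(0.2, 0.4) for _ in range(500)]
stats = [run_toy_simulation(p) for p in params]

# Fit a least-squares line stat = a * omega + b (the "translator").
n = len(params)
mean_p = sum(params) / n
mean_s = sum(stats) / n
a = sum((p - mean_p) * (s - mean_s) for p, s in zip(params, stats)) \
    / sum((p - mean_p) ** 2 for p in params)
b = mean_s - a * mean_p

def infer_parameter(observed_stat):
    """Invert the trained model: map an 'observed' statistic back
    to an estimate of the underlying parameter."""
    return (observed_stat - b) / a

# A mock "observation" generated with a known parameter value.
truth = 0.3
estimate = infer_parameter(run_toy_simulation(truth))
print(f"true = {truth:.3f}, inferred = {estimate:.3f}")
```

The toy also hints at the problems the abstract raises: the inference is only as good as the simulator, so any mismatch between `run_toy_simulation` and the real data-generating process biases the recovered parameter.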