Calibration


--Bkgibson (talk) 07:20, 6 April 2017 (UTC)

Parameter Studies

In trying to identify optimal parameters in the same way we did for MaGICC (e.g. http://adsabs.harvard.edu/abs/2012MNRAS.424.1275B, http://adsabs.harvard.edu/abs/2011MNRAS.415.1051B, http://adsabs.harvard.edu/abs/2012MNRAS.419..771B, http://adsabs.harvard.edu/abs/2012MNRAS.425.1270S), we ran >100 realisations of the same galaxy, varying density thresholds for star formation, star formation efficiencies, feedback efficiencies, etc., focussing in on a locus of values which, by redshift 1-2, gave a flat rotation curve rather than one highly peaked due to unrealistically high inner star formation rates … that was a lot of runs, but just going to z~1-2 gave one a feel for whether we were ‘close’. We then targeted those parameters and ran *those* particular ones to z~0. In the end, we locked down a threshold (resolution-dependent) and a feedback efficiency (for a given IMF) and varied the star formation efficiency until one calibrating galaxy sat on the stellar mass-halo mass relation (e.g. Moster et al 2010; Guo et al 2010). That set our parameters in stone for the basic L* resolution runs (which was only ~330pc), and they were never changed. This was sufficient for the ~2 orders of magnitude in stellar mass we covered in MaGICC (e.g. Fig 7a of http://adsabs.harvard.edu/abs/2012MNRAS.424.1275B), but it would not be sufficient at either the higher or lower mass end: we almost certainly need a mass-dependent efficiency factor at the lower mass end, and the lack of AGN at the higher mass end limits MaGICC to sub-L* or so.
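Just to make that final calibration step concrete, here is a minimal Python sketch, assuming the double power-law form of the Moster et al (2010) stellar mass - halo mass relation; the parameter values below are quoted from memory and should be checked against the paper before use. The idea is simply to measure how far each realisation's calibrating galaxy sits from the relation, in dex:

 import numpy as np
 
 # Approximate Moster et al. (2010) stellar mass - halo mass relation
 # (double power law); parameter values quoted from memory, check the paper.
 LOG_M1 = 11.884   # characteristic halo mass, log10(Msun)
 N0     = 0.0282   # normalisation (m/M)_0
 BETA   = 1.057    # low-mass slope
 GAMMA  = 0.556    # high-mass slope
 
 def moster2010_mstar(mhalo):
     """Expected stellar mass [Msun] for a given halo mass [Msun]."""
     x = mhalo / 10.0**LOG_M1
     return mhalo * 2.0 * N0 / (x**-BETA + x**GAMMA)
 
 def offset_dex(mstar_sim, mhalo_sim):
     """Offset of a simulated calibrating galaxy from the relation, in dex."""
     return np.log10(mstar_sim / moster2010_mstar(mhalo_sim))
 
 # e.g. one realisation's calibrating galaxy; ~0 dex means it sits on the relation
 print(offset_dex(mstar_sim=2.5e10, mhalo_sim=1.0e12))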

What I would strongly suggest is that the ultimate goal for the parameter studies should be to identify the optimal set of parameters not only for the primary production run, whatever that might be, but also for parallel runs with increasing levels of refinement. Let’s say the base production run was at 1kpc, matching Eagle, Magneticum (uhr), and Illustris, more or less … I would like to see us identify the optimal parameters for 500pc, 250pc, 125pc, and maybe even 60pc… :) … having all those means we can be running across multiple platforms, not only on the primary mega-cuboid, but also on individual zooms on ellipticals, clusters, groups, and disks.
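A minimal sketch of how such a multi-resolution parameter study could be organised is below; the parameter names and value ranges are purely illustrative placeholders, not actual HR5/RAMSES settings.

 from itertools import product
 
 # Illustrative parameter-study grids, one per refinement level; names and
 # values are placeholders, not actual HR5/RAMSES settings.
 resolutions_pc = [1000, 500, 250, 125, 60]
 
 grids = {
     res: {
         "sf_density_threshold": [0.1, 1.0, 10.0],  # cm^-3
         "sf_efficiency":        [0.01, 0.02, 0.05],
         "feedback_efficiency":  [0.5, 1.0, 2.0],
     }
     for res in resolutions_pc
 }
 
 def realisations(res):
     """Yield every parameter combination to be run (to z~1-2) at a given resolution [pc]."""
     grid = grids[res]
     keys = sorted(grid)
     for values in product(*(grid[k] for k in keys)):
         yield dict(zip(keys, values))
 
 # e.g. enumerate the short calibration runs for the 500pc level
 for params in realisations(500):
     print(params)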

Philosophically, having that ambitious framework in place (e.g. see the Simulations table for Magneticum Pathfinder and Magneticum at http://www.magneticum.org/simulations.html ), without losing sight of your plan to cover the largest volume imaginable, will provide great added value.


Competitors and How they Calibrate

MaGICC - calibration locked to the M*-Mhalo relation of Moster et al (2010) & Guo et al (2010) .. getting the baryon fraction correct was the unique characteristic of MaGICC, and something that had not been done successfully previously (FIRE, etc, now do the same) … having done that, we found that the 8 galaxy scaling relations were all recovered (Fig 7 of http://adsabs.harvard.edu/abs/2012MNRAS.424.1275B), as were key CGM characteristics (e.g. http://adsabs.harvard.edu/abs/2012MNRAS.425.1270S) .. follow-up papers looked at the LSS, correlation functions, and luminosity functions, but the sole calibrator employed was M*-Mhalo … i.e., MaGICC is not a large cosmological run which was tuned to, say, luminosity functions or mass-metallicity relations or the colour-magnitude relation … those were secondary things against which we compared, but were not the primary calibrator!

Magneticum - the team has never published the specific empirical relations that their models were constrained to recover … there are plenty of papers from the team showing agreement with various X-ray and optical relations for clusters, SZ clusters, etc, but *exactly* how the efficiency and parameter values were “tuned” has not been shown yet … some combination of what Illustris and Eagle do, I would guess, given the people involved.

Eagle - from http://adsabs.harvard.edu/abs/2015MNRAS.446..521S, the parameters were calibrated on recovering (1) the galaxy stellar mass function, (2) the stellar mass - halo mass relation (which was what we employed in MaGICC), (3) the galaxy size - stellar mass relation, and (4) the black hole - stellar mass relation; once calibrated on those 4 primary ones, they examine any number of secondary relations (but do not calibrate on them), including the Tully-Fisher relation, the specific star formation rate - stellar mass relation, the mass-metallicity relation, the X-ray luminosity - temperature relation for clusters, the gas fraction - total mass relation (which it fails on, unlike MaGICC… :) …), and IGM column densities of CIV and OVI.

Illustris - from http://adsabs.harvard.edu/abs/2014MNRAS.444.1518V, the parameters were calibrated on recovering (1) cosmic star formation rate density vs redshift, and (2) stellar mass - halo mass relation (as we did in MaGICC and as Eagle did amongst their 4 calibrators). After calibrating on those 2 relations, they applied it to all the usual ones, including all those listed under Eagle …


Summary and Thoughts Re: Calibration

1. The philosophies above boil down to (a) two-phase: calibrating on galaxy properties for zeroth-order, followed by stellar mass - halo mass to lock parameters down (MaGICC), or (b) one-phase: calibrating on stellar mass - halo mass (Eagle & Illustris) alongside 1 to 3 other relations (1 for Illustris; 3 for Eagle). (a) is preferred, I think, if we are reaching <~500pc; at 1-2 kpc, (b) is fine. If building parameter files for higher resolution / more levels of refinement, (a) is likely the safer route.

2. It is clear that the stellar mass - halo mass relation must be a primary calibrator, regardless of resolution; at higher resolution, I would also suggest adding the Schmidt law (as Agertz et al have used indirectly), but at lower resolution it is perhaps less important. The big question is whether a Bayesian-like optimisation across all four of stellar mass - halo mass, black hole - stellar mass, size - stellar mass, and the stellar mass function would be best … probably, but it is obviously more complex (then again, Eagle did it to its advantage over Illustris) … see the sketch after this list.

3. Need to think in advance about the data management / provision … the real value added by Eagle, Illustris, Millennium etc, is the provision of reduced data in useful and palatable formats … a beautiful and functional interface, data products that people want, etc, all tied to an initial Nature paper trumpeting the largest simulation ever undertaken and all that good stuff! :)

4. Final philosophical thoughts .. given Magneticum, Eagle, and Illustris, how HR-5 will separate itself is probably something you have already thought about. If it is the long dimension providing the longest waves, coupled with recovering more massive clusters, then that is cool. I do think that in terms of making the biggest impact, pushing the force resolution beyond the ~1kpc of Eagle would be important … going one more level of refinement and reaching 500pc opens up a world of possibilities (with resolutions more or less the same as we have in the RaDES sample: http://adsabs.harvard.edu/abs/2012A%26A...547A..63F) … it allows one to do some internal chemistry/kinematics for ~L* galaxies at least, in addition to the primary cosmological goals … but until we know more about the scalability and infrastructure, it might be hard to commit to that level of resolution, I suspect.
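As a concrete illustration of the Bayesian-like option in point 2, here is a minimal Python sketch of a combined calibration objective; every observable name, offset value, and tolerance below is a placeholder, and in practice each term would compare a full simulated relation against its observed counterpart.

 import numpy as np
 
 def log_likelihood(offsets_dex, tolerances_dex):
     """Gaussian log-likelihood of per-relation offsets (both in dex)."""
     offsets = np.asarray(offsets_dex)
     sigma = np.asarray(tolerances_dex)
     return -0.5 * np.sum((offsets / sigma) ** 2)
 
 # Hypothetical median offsets of one parameter set from the four Eagle-style
 # calibrators (values are placeholders, in dex):
 offsets = {
     "mstar_mhalo":     0.05,   # stellar mass - halo mass
     "stellar_mass_fn": 0.10,   # galaxy stellar mass function
     "size_mstar":     -0.08,   # size - stellar mass
     "mbh_mstar":       0.15,   # black hole - stellar mass
 }
 tolerances = {key: 0.2 for key in offsets}  # allowed scatter per relation, in dex
 
 print(log_likelihood(list(offsets.values()), list(tolerances.values())))
 # The parameter set maximising this (via a grid scan, an emulator, or MCMC)
 # would be the calibrated one.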




Initial Sanity Checks (In progress)

  1. Public version
  2. AGN only version
  3. RAMSES+AGN+CH without the chemistry switched on
  4. RAMSES+AGN+CH with everything switched on


Specimen parameter file: File:Parameter chemomodif.nml