
How to fix the redshift when using the prospector-beta prior #351

Open · Abrygzy opened this issue Oct 13, 2024 · 6 comments

Abrygzy commented Oct 13, 2024

Hi,

I want to use the prospector-beta priors (Wang, Leja, et al. 2023, ApJL) with a fixed redshift. Specifically, I want to use the 'PhiSFHfixZred' prior to fix the fit at a given redshift, but I found that the redshift (the nzsfh_1 parameter) changes during the fit. Sometimes the fitting process also returns warnings and errors from the redshift-related calculations in astropy. I have tried the following two ways of using the 'PhiSFHfixZred' prior (I only show the parameters related to the SFH and redshift, basically shamelessly stolen from Dr. Bingjie Wang's great repo https://github.com/wangbingjie/prospector_catalog):

   # 9-d vector: [zred, logmass, logzsol, logsfr_ratios x 6]
   model_params['nzsfh'] = {'N': 9,
                            'isfree': True,
                            'init': np.array([3, 11, 0.0, 0, 0, 0, 0, 0, 0]),
                            'prior': PZ.PhiSFHfixZred(zred=3.0,
                                                      mass_mini=6.0, mass_maxi=12.5,
                                                      z_mini=-1.98, z_maxi=0.19,
                                                      logsfr_ratio_mini=-5.0, logsfr_ratio_maxi=5.0,
                                                      logsfr_ratio_tscale=0.3, nbins_sfh=7,
                                                      const_phi=True),
                            'init_disp': np.array([0.1, 0.5, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3])}
   # zred is not free; it is pulled out of the nzsfh vector
   model_params['zred'] = {'N': 1, 'isfree': False,
                           'depends_on': transforms.nzsfh_to_zred,
                           'init': 3.0}
   nbins_sfh = 7
   model_params['sfh'] = {'N': 1, 'isfree': False, 'init': 3}
   # SFR ratios between adjacent time bins, also derived from nzsfh
   model_params['logsfr_ratios'] = {'N': nbins_sfh - 1, 'isfree': False,
                                    'init': 0.0,
                                    'depends_on': transforms.nzsfh_to_logsfr_ratios,
                                    'prior': priors.StudentT(mean=np.full(nbins_sfh - 1, 0.0),
                                                             scale=np.full(nbins_sfh - 1, 0.3),
                                                             df=np.full(nbins_sfh - 1, 2))}

   # stellar mass formed in each time bin, derived from the SFR ratios
   model_params['mass'] = {'N': 7,
                           'isfree': False,
                           'init': 1e6,
                           'units': r'M$_\odot$',
                           'depends_on': logsfr_ratios_to_masses}

I also tried fixing 'zred' without any dependency:

   model_params['zred'] = {'N': 1, 'isfree': False,
                            'init': 3.0}

Some of the related astropy warnings look like this:

/Users/abry/micromamba/envs/py312_mamba/lib/python3.12/site-packages/astropy/cosmology/flrw/base.py:1072: IntegrationWarning: The integral is probably divergent, or slowly convergent.
  return quad(self._lookback_time_integrand_scalar, z, inf)[0]

/Users/abry/micromamba/envs/py312_mamba/lib/python3.12/site-packages/astropy/cosmology/flrw/base.py:1072: IntegrationWarning: The occurrence of roundoff error is detected, which prevents the requested tolerance from being achieved. The error may be underestimated.
  return quad(self._lookback_time_integrand_scalar, z, inf)[0]

/Users/abry/micromamba/envs/py312_mamba/lib/python3.12/site-packages/astropy/cosmology/flrw/base.py:1072: IntegrationWarning: The algorithm does not converge. Roundoff error is detected in the extrapolation table. It is assumed that the requested tolerance cannot be achieved, and that the returned result (if full_output = 1) is the best which can be obtained.
  return quad(self._lookback_time_integrand_scalar, z, inf)[0]

Here are my questions:

  1. How do I properly fix the redshift with the 'PhiSFHfixZred' prior? Or should I just ignore these warnings?
  2. Are there other parameters I need to change when using these beta priors?
  3. Is it possible to parallelize the fitting when using the beta priors? I know I can use different cores for SPS models with different logzsol, but in the prospector-beta priors the stellar metallicity is not a free parameter.

Thanks!
Zeyu

jrleja (Collaborator) commented Oct 13, 2024

@wangbingjie this one might be for you : )

wangbingjie (Contributor) commented:

Hmm, the first snippet that you sent looks correct -- zred should be fixed to your input value... 'depends_on': transforms.nzsfh_to_zred needs to be in model_params['zred'] because -- well -- zred depends on this transformation (the function simply separates the redshift out of the N=9 vector in model_params['nzsfh']). Could I ask if you are just using my parameter file?
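
(For reference, a paraphrased sketch of what such a transform does; this is not the verbatim prospector source, and it assumes the nzsfh vector is ordered [zred, logmass, logzsol, logsfr_ratios...]:)

def nzsfh_to_zred(nzsfh=None, **extras):
    # zred is assumed to be the first element of the nzsfh vector
    return nzsfh[0]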

As a temporary solution, you could try using the prior PhiSFH but letting the redshift vary slightly, over a range determined by its measurement uncertainty; this way the redshift uncertainty also gets properly propagated into the rest of the inferred parameters. I usually impose an error floor (np.clip(zspec_unc, a_min=0.05, a_max=None)) to ensure that the integral is computed without raising any errors.

Let me know if you have further questions!

Abrygzy (Author) commented Oct 14, 2024

> Hmm, the first snippet that you sent looks correct -- zred should be fixed to your input value... I usually impose an error floor (np.clip(zspec_unc, a_min=0.05, a_max=None)) to ensure that the integral is computed without raising any errors.

Thanks for your kind and rapid reply! I'm trying to reproduce the SED-fitting part of Dr. Weichen Wang's recent work (https://arxiv.org/pdf/2409.17956), which uses all the parameter settings of your recent work (https://arxiv.org/pdf/2310.01276) except for the time-bin setting and the dust attenuation. I use the params_parrot_phisfhfixzred function in params_prosp_parrot.py as the model, and I made the following adjustments:

  1. I change the time-bin setting following Weichen's work, and fix the dust power-law index to 0.7 for the CF00 extinction law (see the pinning sketch after this list);
  2. Since they only have rest-frame optical to near-IR data, I fix all dust-emission-related settings to their defaults;
  3. I am using FSPS;
  4. I fix all AGN-related parameters to their defaults;
  5. I use emcee instead of dynesty, because I found dynesty runs extremely slowly on a single core.
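
(For reference, a minimal sketch of the usual prospector pattern for pinning a parameter, as in adjustments 1 and 4: set isfree=False with the desired init, the same convention as the fixed 'zred' block above. The parameter name 'dust_index' and the value here are hypothetical placeholders, not taken from the thread.)

# hypothetical: pin a parameter by making it non-free; the name and
# value below are placeholders for whatever the CF00 setup requires
model_params['dust_index'] = {'N': 1, 'isfree': False, 'init': 0.7}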

Thank you again for releasing this super helpful parameter file! Please let me know if papers other than Wang, Leja, et al. 2023, ApJL and Wang, Leja, et al. 2024, ApJS should be cited when using these parameter settings.

I still have a naive question: where should I pass this error floor? If I have a redshift measurement error dz, should I set zred_mini=z - dz, zred_maxi=z + dz in the nzsfh prior PhiSFH after clipping it?

wangbingjie (Contributor) commented:

Sorry I've been busy with Webb proposals. I was suggesting something like

obs['zspec'] = z_spec                # spectroscopic redshift
obs['zspec_16'] = z_spec16           # 16th percentile
obs['zspec_84'] = z_spec84           # 84th percentile
# impose a +/- 0.05 error floor on the redshift interval
obs['zspec_16_clip'] = np.min([obs['zspec_16'], obs['zspec'] - 0.05])
obs['zspec_84_clip'] = np.max([obs['zspec_84'], obs['zspec'] + 0.05])

then pass to the prior:
zred_mini=obs['zspec_16_clip'], zred_maxi=obs['zspec_84_clip']
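
(A minimal sketch of how the clipped bounds plug into the nzsfh prior from the first snippet; it assumes PhiSFH accepts the same keywords as PhiSFHfixZred, with zred_mini/zred_maxi replacing the fixed zred:)

model_params['nzsfh']['prior'] = PZ.PhiSFH(
    zred_mini=obs['zspec_16_clip'], zred_maxi=obs['zspec_84_clip'],
    mass_mini=6.0, mass_maxi=12.5,
    z_mini=-1.98, z_maxi=0.19,
    logsfr_ratio_mini=-5.0, logsfr_ratio_maxi=5.0,
    logsfr_ratio_tscale=0.3, nbins_sfh=7,
    const_phi=True)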

This is just to ensure that the integral being calculated in the prior is always computed without errors. Let me know if you run into further problems in reproducing the fit from that paper!

Abrygzy (Author) commented Oct 17, 2024

> Sorry I've been busy with Webb proposals. I was suggesting something like...

Thank you so much! Good luck with your proposals! I did a test run and found that it works correctly. I still have two questions:

  1. There seems to be a bug when using the MCMC sampler and the prospector-beta priors at the same time. emcee calls the theta_bounds() function of the ProspectorParams class in models/parameters.py, which should return the range of each free parameter. However, the 'nzsfh' prior is recognized as 3 or 4 single parameters (depending on whether the redshift is fixed), so the current theta_bounds() returns only 3 or 4 tuples, which does not match the dimensionality of 'nzsfh' (2 or 3 plus the number of SFR ratios). I made a simple revision to make it work:
    def theta_bounds(self):
        """Get the bounds on each parameter from the prior.

        :returns bounds:
            A list of length ``ndim`` of tuples ``(lo, hi)`` giving the
            parameter bounds.
        """
        bounds = np.zeros([self.ndim, 2])
        for p, inds in list(self.theta_index.items()):
            pb = self.config_dict[p]['prior'].bounds()
            ###################################################
            if p == 'nzsfh':
                # the p-beta prior reports one (lo, hi) pair per
                # sub-prior; repeat the last (log-SFR-ratio) pair for
                # the 6 SFR-ratio dimensions
                bounds[inds, :] = np.array([pb[0], pb[1], pb[2]] + [pb[3]] * 6)
            else:
                bounds[inds, :] = np.array(pb).T
            ###################################################
        # Force types ?
        bounds = [(np.atleast_1d(a)[0], np.atleast_1d(b)[0])
                  for a, b in bounds]
        return bounds

This is a stupid revision, since it only works with the 'PhiSFH' prior and 6 SFR ratios; I'm sure you have a much smarter way to fix this problem (a more general sketch follows after this list).
  2. I really want to use the dynesty sampler, since it provides a much better estimation, but I find it extremely slow when using the beta priors on a single core. Is it possible to speed it up with parallelization? Or should I use emulators instead of FSPS, as in your parameter files?
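
(A more general, untested sketch of the same patch; it assumes, as above, that a multivariate prior's bounds() returns one (lo, hi) pair per sub-prior and that the last pair covers all remaining dimensions:)

import numpy as np

def theta_bounds(self):
    """Like ProspectorParams.theta_bounds, but tolerant of priors whose
    bounds() returns fewer (lo, hi) pairs than the parameter has
    dimensions (e.g. the p-beta 'nzsfh' prior)."""
    bounds = np.zeros([self.ndim, 2])
    for p, inds in list(self.theta_index.items()):
        pb = self.config_dict[p]['prior'].bounds()
        try:
            # standard prospector convention: (lo_vector, hi_vector)
            bounds[inds, :] = np.array(pb).T
        except ValueError:
            # assumed p-beta convention: one (lo, hi) pair per
            # sub-prior; repeat the last pair to fill the rest
            ndim_p = bounds[inds].shape[0]
            pairs = list(pb) + [pb[-1]] * (ndim_p - len(pb))
            bounds[inds, :] = np.array(pairs)
    return [(np.atleast_1d(a)[0], np.atleast_1d(b)[0])
            for a, b in bounds]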

Thanks!
Zeyu

wangbingjie (Contributor) commented:

  1. Ah, thanks for flagging the bug! I generally use dynesty, which does not call theta_bounds, and I only tested running MCMC on the full p-beta priors, so I didn't catch this bug... sorry about that!
  2. The emulator currently only works for photometry. Since you are fitting a spectrum at the same time, FSPS is unfortunately the only option... What do you mean by "extremely slow"? It is normal for those kinds of fits to take > 1 day on a single core (with a ~5% sampling efficiency, if I remember correctly). Parallelizing dynesty is certainly possible (you'd need to check the dynesty page; see the sketch after this list). I never bothered to do it because the scaling is not as good as for MCMC, where the chains are independent of each other.
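
(A minimal sketch of dynesty's built-in multiprocessing support, following the dynesty documentation rather than this thread; loglike, prior_transform, and ndim are hypothetical stand-ins for the prospector model's callables and dimensionality.)

import multiprocessing as mp
import dynesty

# loglike(theta) -> float and prior_transform(u) -> theta are assumed
# to wrap the prospector model; ndim is the number of free parameters
with mp.Pool(8) as pool:
    sampler = dynesty.NestedSampler(loglike, prior_transform, ndim,
                                    pool=pool, queue_size=8)
    sampler.run_nested()
    results = sampler.results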
