Effective exposure times of backup program #2370
Conversation
I recommend taking another look at PR #1815 as part of this. |
Yes, thanks, I was aware of the other PR, but since it was a one-liner PR (like this one ATM), I decided to open a separate one. But I agree this should supersede #1815. Following up on the specific #1815 discussions: from my understanding, when we update the nominal flux we need to rerun the: (i.e. #1274 and |
I still have to carefully read the -- long and rich! -- slack thread about that, so sorry if I'm adding noise here. but from what I understood from skimming it, the goal is to have the backup tiles observed to a given efftime_spec, while discarding the ebv factor, right? if so, I kind of feel that it may be better to tackle that at the per-{exposure,tile} level, rather than at the per-fiber level, no? one would have to double-check, but I think that the offline efftime_spec info used by nts is the per-tile one. desispec/py/desispec/tile_qa.py Lines 337 to 342 in 37e454d
and it is a combination of the information at the per-exposure level: desispec/py/desispec/tile_qa.py Lines 277 to 281 in 37e454d
the script for the per-exposure is exposure_qa.py. what I suggest would be to add a few lines in tile_qa.py - and possibly the same in exposure_qa.py, after the per-{exposure,tile} computation, saying "if (main?) backup tile, then remove (or add?) the ebv factor". would that make sense?
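for concreteness, a minimal sketch of what such a patch could look like (every name and the r-band coefficient below are placeholders, not the actual tile_qa.py / exposure_qa.py variables):

```python
import numpy as np

# Hypothetical post-processing step after the per-tile (or per-exposure)
# EFFTIME has been computed; all names here are illustrative only.
R_R = 2.165   # assumed r-band extinction coefficient (placeholder value)

def undo_ebv_penalty(efftime, ebv, program):
    """Remove the Galactic-dust transparency factor for backup tiles.

    EFFTIME scales as flux**2, so dust enters as 10**(-2 * 0.4 * R_R * EBV);
    dividing that factor back out gives the 'no-extinction' effective time.
    """
    if str(program).lower() == "backup":
        return np.asarray(efftime) / 10 ** (-2 * 0.4 * R_R * np.asarray(ebv))
    return np.asarray(efftime)
```
|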
Thanks for the comment @araichoor ! One of the goals is indeed to remove the ebv factor, but another is to make sure that the TSNR2 are computed for brighter 'templates' (there could be other things there as well, as I may not have a full understanding of the issue). I personally don't have an opinion on the 'riskiness' of the change as I am less familiar with the operational side of things. From a user standpoint, it obviously would be good if the TSNR2/EFFTIMEs for individual fibers were consistent with what is done when planning the observations, but maybe that's not really achievable, given that we are potentially planning to make a change in the calculation of those for backup, so there will always be an inconsistency between past and new observations. |
Thanks for continuing the slack thread here.
To be explicit: since we limit the backup program to visits under 10
minutes, and since we probably should have a minimum around 5 minutes (FP
thermal, although that might have gotten better with the new firmware),
there's not a lot of gain to be had in adapting the visit exposure time.
The main thing is to stop revisiting fields when we should be marking them
done!
And the problem is that in conditions of good transparency and seeing, but
bright sky, as can happen in twilight or full moon, we are badly
over-exposing.
Regarding the brighter template, here's my thinking: t_effective involves
terms that affect signal (seeing and transparency) and terms that involve
noise (sky and object photons).
For the galaxy programs we use a background-limited effective time. Call
that t_back.
If the sky was negligible, then we'd be photon-limited. Call that t_photon.
The challenge is that the backup program has a wide range of conditions.
Sometimes the signal is diminished, while other times the sky is bright.
E.g., if the BGS program is observing at 25% survey speed: if the signal
is 4x low, then t_photon = t_back; but if the noise is 4x high, then the
bright stars are getting t_photon = 4*t_back, which could be ~12 minutes.
Now if the Backup program is observing with 5% survey speed, then we could
range from t_photon = t_back (bad seeing/transparency) to t_photon =
20*t_back (bright sky, assuming very bright target). Our 10 minute visit
with t_back = 30 sec (since 5% speed) could be giving anywhere from 30
seconds to 10 minutes of t_photon.
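To spell out that arithmetic in a toy form (a sketch only, using the numbers above; how the low end scales in detail depends on how the speed loss is split between signal and sky):

```python
# Toy bookkeeping for the two limiting cases of a 10 minute Backup visit
# taken at 5% survey speed (illustrative numbers only).
visit_time = 600.0                  # wall-clock seconds
survey_speed = 0.05                 # background-limited speed vs. nominal
t_back = survey_speed * visit_time  # 30 s of background-limited time

# Bright-sky limit: a star much brighter than the sky is photon-limited and
# keeps collecting photons at roughly the nominal rate, so t_photon tracks
# the wall clock.
t_photon_bright_sky = visit_time    # 600 s, i.e. 20 * t_back

# Poor-signal limit: clouds / bad seeing suppress the stellar signal just as
# they suppress the background-limited tracers, so t_photon collapses
# toward t_back.
t_photon_poor_signal = t_back       # ~30 s

print(t_back, t_photon_poor_signal, t_photon_bright_sky)
```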
Perhaps we'd like the Backup program to yield spectra at G=18 that are no
worse than the MWS spectra at G=19. For photon-limited performance, that
would require 2.5 times less t_photon. If we think that many MWS spectra
were taken in high-sky conditions, then the stars were getting t_photon =
12 minutes, which suggests that we should require t_photon > 5 minutes for
the Backup program.
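Written out, that step is just the photon-limited magnitude scaling (assuming SNR^2 grows linearly with exposure time for a photon-limited source):

```math
\frac{t_{18}}{t_{19}} = 10^{-0.4\,(19-18)} \approx \frac{1}{2.5},
\qquad
t_{\rm photon}^{\rm backup} \gtrsim \frac{12\ \mathrm{min}}{2.5} \approx 5\ \mathrm{min}.
```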
That matches pretty well to the 5 minute minimum visit time. I.e., in very
high-sky but nominal seeing/transparency, we'd be done in 5 minutes (even
though t_back would be 15 seconds).
Meanwhile, it is true that if the signal is weak (bad seeing/clouds), even
10 minutes of time will only give <30 seconds of t_photon, which is only
10% completion. Those conditions are truly less useful for bright stars,
and we'll basically change the Backup survey to be not completing tiles in
periods of low signal, but rather to be waiting for periods of high
background. But we have lots of those periods: twilights and full moon.
To do otherwise would be to set a lower t_photon, so that one can
occasionally complete in times of low signal and will routinely overexpose
in times of high background. And that will apparently not get to SNR at
G=18 that keeps up with the MWS at G=19.
Obviously all of the above would need some tweaking to acknowledge the fact
that at G=18, the star is "only" 10-15 times brighter than the nominal
sky. So the correct ETC calculation, call it t_18, would not have quite as
extreme a behavior. We might conclude that we want t_18 > 180 seconds, for
instance.
Best,
Daniel
|
I'm really embarrassed that I don't know what gpbbackup is. When do we set that? I'm not aware of any gpbbackup tiles in the tiles file. Reflecting a little more, maybe that's "GFA pass band" and is used for the offline EFFTIME estimates that are most tuned to match what we get from the GFAs (i.e., for checking the results from the exposure time calculator).

I think @sbailey or @akremin would be the best references on how to rerun the tsnr code for a handful of files to see what things look like.

I think I would be in favor of changing it at the per-fiber level, and keeping the tile-derived quantities as medians of the fiber-derived quantities. The biggest issue with that approach would be that we would have some cases where the same target is observed in the backup and bright tiles, and has an apparently lower EFFTIME in the bright tile and a higher EFFTIME in the backup tile, but only because the EFFTIMEs mean something different. But I think that's okay, because we are explicitly saying we want the EFFTIMEs to mean something different in these cases!

I also think that you don't need to make "brighter templates"---IIRC the templates just care about the wavelength dependence and not the magnitude---and instead the earlier PR should be doing enough there, albeit after the calibration from TSNR2 to EFFTIME is changed.

After we merge something like this we will definitely have a time in daily when past backup EFFTIMEs mean something different from new backup EFFTIMEs. We could consider updating the past backup EFFTIMEs, or just move forward and update them as part of the next DR.

Daniel, I think I agree with everything you say, but I think all of the machinery is there to support what you want, and it's just a matter of picking the right source flux in the EFFTIME calculation---e.g., 18th mag, or 16th. I agree with you that this ends up being a trade with respect to how much effective time we get for the faint vs. the bright stars in different conditions, and that we just have to accept that there will be a trade there. I also agree that in principle we could ~give up on the ETC and just accept that it will observe each backup tile to ~600 s and then get the EFFTIMEs right only in the offline pipeline. It would be better to keep both in sync, though. |
@julienguy thinks that it should be straightforward to generate TSNR2s for representative data; pinging him here. He also knows about the TSNR2 <-> EFFTIME calibration. |
To report an experiment:
I took all 1310 Backup tiles in Kibo. In each, I took the first petal
(usually 0) from the cumulative directory and considered the
MEDIAN_COADD_SNR_R column in the scores HDU of the coadd-* file. I then
computed the mean SNR of the stars between G=17.5 and 18, as well as the
mean SNR of the stars between G=15.5 and 16.5.
These plots are those SNRs versus themselves and versus the total exposure
time. In the histogram, the label shows the 16/50/84 quantiles.
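For reference, the per-tile measurement is roughly of this form (a sketch, not the actual script; the coadd filename and the assumption that GAIA_PHOT_G_MEAN_MAG sits in the coadd FIBERMAP are mine, while MEDIAN_COADD_SNR_R is the scores column named above):

```python
import numpy as np
from astropy.io import fits
from astropy.table import Table

coadd_file = "coadd-0-TILEID-thruNIGHT.fits"   # placeholder filename

with fits.open(coadd_file) as hdul:
    scores = Table(hdul["SCORES"].data)        # per-target scores
    fibermap = Table(hdul["FIBERMAP"].data)    # per-target fibermap

gmag = np.asarray(fibermap["GAIA_PHOT_G_MEAN_MAG"])
snr_r = np.asarray(scores["MEDIAN_COADD_SNR_R"])

faint = (gmag > 17.5) & (gmag < 18.0)
bright = (gmag > 15.5) & (gmag < 16.5)
print("mean SNR, G = 17.5-18.0:", np.nanmean(snr_r[faint]))
print("mean SNR, G = 15.5-16.5:", np.nanmean(snr_r[bright]))
```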
My impression is that there is less scatter in SNR at G=17.75 than I
feared, although we should remember that exposure time scales as the square
of the SNR. Tiles with multiple visits do have somewhat more SNR than the
single visits, but not grossly so. That said, some tiles are very
over-exposed.
Unfortunately, this is not the right data set to compare to observing
conditions, because any exposure time > 600 sec is coadded across multiple
nights. There are just over 2500 visits for these 1310 tiles, so an
average of two visits. And some of these visits will be dreadful
conditions, so it's hard to tease out what is really an overexposure.
Still, if we take the cases that did finish in one exposure, we get some
vote on what is possible. These are shown in the two figs that end in
_nexp1 and have "Single Exposure" in the title. Presumably these tiles
were observed in reasonably good conditions, probably with darker skies.
These achieve a median SNR at G=17.75 of 13.7, with 84% above SNR=12.
I compare this to a sample of 104 Bright tiles, processed in the same way,
but with the SNR being computed with G in the range [18.5,19]. These yield
a median SNR of 11.7, with 16-84% of 10.6 to 13.6. I think it's not
surprising that there is a mild positive skewness: we know that some
conditions will favor point sources more than the background-limited BGS.
But I think this indicates that an SNR ~ 12 is the standard that the MWS has
set for its faint end, and it appears that the Backup program can achieve
that at G=17.75 in a single exposure (of up to 600 sec) in some reasonably
common circumstances.
I note that if we had an ETC that was accurate in taking us to SNR=12, then
the current single-exposure cases would have a median completion time of
368 seconds, with a 16-84 range of 236 to 530. This is interestingly less
than 600 seconds.
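For reference, that projection just rescales each single-exposure time by the quadratic SNR dependence noted above:

```math
t_{\mathrm{SNR}=12} = t_{\rm exp}\left(\frac{12}{\mathrm{SNR}_{\rm measured}}\right)^{2}
```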
Whether the Backup program is achieving this SNR=12 standard in other
reasonably clear/sharp conditions that are currently being called partially
complete is not something I can judge here.
So, to summarize:
* SNR = 12 at G=18.75 is typical of the MWS, so I think that is a
plausible standard to match.
* We achieve SNR = 12 at G=17.75 in most single exposure Backup tiles, with
a median time of 360 seconds.
* But just over half of the tiles require more than one visit, and I don't
know how often we could have obtained SNR = 12 in a single visit that is
currently being rated as partial.
Thanks!
Daniel
|
Adding the figures here (attached):
* The G=16 figs, with a comparison to G=17.75.
* The G=17.75 figs.
* G=17.75 for the cases where we have only one exposure (Backup_snr18_exptime_nexp1.pdf).
* The Bright data at G=18.75. |
Thank you Daniel! Just a followup/extension of what you've done. I've taken the first exposure (cframe) of backup tiles that are repeated vs. those that are considered done after the first exposure, and plotted how the SN(R) depends on magnitude for both. The notebook used to produce the plots is on NERSC here: https://data.desi.lbl.gov/desi/users/koposov/backup_investigation/
From my point of view, if we require something equivalent to SN(R) = 5-7 at G=18, that will significantly reduce the number of required repeats, while still being scientifically useful. |
Thanks, Sergey!
Another consideration might be how well we do on the first Backup tile of
the night. We know that this is a recurring opportunity, often in
reasonable conditions save for the fading twilight, and it seems
counter-productive to set up a situation where most of these require a
repeat.
Best,
Daniel
|
Looking specifically at the very first exposure of the night: it seems that when they are marked for repeats they are typically significantly worse than usual. Overall, given the tight sequence for the tiles requiring just one exposure, and the fact that exposures on tiles requiring repeats are always below them in the SNR vs. mag space, I think we may really be able to just adjust the GOALTIME for backup, and that will fix the original problem. |
@rongpu , just adding you to this thread in case you want to weigh in on the DESI E(B-V) program in the context of the backup effective times. |
In the spirit of something concrete & discussion on slack, |
I'm in favor of making the easy change to the definition of EFFTIME_SPEC for the backup tiles. Can we also get the corresponding fix to the ETC in, though, so that EFFTIME_ETC and EFFTIME_SPEC remain in rough sync? We'll also probably want to trigger a large run where we recompute the EFFTIME_SPEC for ~all of the backup tiles, to update them to use the no-extinction varieties. |
Maybe I'm missing something, but from my previous look at the ETC, it seemed to me that the extinction is correctly ignored for backup tiles. I.e. this line And it is then used here |
Yes, we tried to get this to work, but I feel like we must have failed; I think I have found that EFFTIME_ETC / EFFTIME_SPEC doesn't depend on extinction. |
So, do we have a consensus to merge this PR? (which ignores extinction in the TSNR of the backup targets). |
@schlafly On the ETC, is there a way to run it somehow to test what's going on? desietc doesn't seem to have a test suite, so I don't quite know how one could test if/what ebv does. |
@dkirkby , is there a way to run the ETC offline, post-facto? |
Yes. If you pip-install the desietc package, it installs an |
@sbailey: The surveyops team thinks we can merge this PR and the associated PR desihub/desietc#14. There will then be some additional work that needs to be done:
|
Since Klaus is busy with bright time testing, please coordinate with him for when that can be done by whom, then merge+deploy whenever everyone is ready and available. i.e. the ball is in your court to coordinate this to completion. Thanks.
Please open a new ticket defining what needs to be done (or re-post what ticket that is). IIUC this is a cleanup step for the record, but doesn't have to be closely coordinated with the merge+deploy. Is that correct? It's not particularly viable to just rerun the tsnr afterburner on everything, so if there is a way that this could be done as a surgical tile-by-tile single-quantity patch, that might be preferable. |
And, yes, the second thought is a clean-up step that does not need to be coordinated with the merge+deploy and could be considered a separate issue for later discussion. |
@dkirkby: Are you comfortable with Sergey's change in desihub/desietc#14? Klaus will want your sign off before we deploy a new version of the ETC on the mountain. |
Thinking a little about what tiles we would want to do:
Is 107 surgical enough? |
107 is surgical, and I'm also fine with running over all 1458 if the update is isolated to "run this code on these files and update this single column". That is especially true if someone from surveyops or backup program folks could take the lead for what needs to be patched and how. I'm trying to avoid "Anthony please rerun the tsnr afterburner on all nights that had a backup tile" just to pickup this one change. |
I have just tagged 0.1.19 of the desietc package that includes desihub/desietc#14. The next step is for Klaus to install and deploy this version on the mountain, then it should be live. |
Klaus has installed and deployed the new version of the ETC on the mountain, so I'm merging this PR. |
Since there was no issue associated with this PR, I'll post this comment here. I see this PR changed the EFFTIME_SPEC calculation and a related PR changed the EFFTIME_ETC calculation. Should the EFFTIME_GFA calculation also be updated? If yes, then the relevant line(s) to modify are here: desispec/py/desispec/efftime.py Line 90 in 730aa44
Which is inside the function desispec/py/desispec/efftime.py Line 13 in 730aa44
The tsnr afterburner calls it here: desispec/bin/desi_tsnr_afterburner Line 1213 in 730aa44
|
Yes, I think it would be better to also update that. It doesn't affect operations directly but ideally all the different EFFTIMEs should be tabulating roughly the same things. |
FWIW I've created the PR #2395 that gets rid of the E(B-V) factor for backup in efftime.py |
@akremin: Are we satisfied that all the |
Yes, all three EFFTIMEs are now computed without the E(B-V) term. We may want to check that these three are consistent with one another once we have larger statistics (we currently only have one or two backup tiles observed since we updated the GFA efftime). But that will take some time and shouldn't prevent us from closing this out.
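One possible cross-check once more backup tiles accumulate (a sketch only; the exposures-summary path, and the assumption that it carries EFFTIME_SPEC / EFFTIME_ETC / EFFTIME_GFA and a FAPRGRM program column, are mine):

```python
import numpy as np
from astropy.table import Table

# Placeholder path to an exposures summary table with per-exposure EFFTIMEs.
exps = Table.read("exposures-daily.csv")

is_backup = np.char.lower(np.asarray(exps["FAPRGRM"], dtype=str)) == "backup"
good = is_backup & (np.asarray(exps["EFFTIME_SPEC"]) > 0)

for col in ("EFFTIME_ETC", "EFFTIME_GFA"):
    ratio = np.asarray(exps[col])[good] / np.asarray(exps["EFFTIME_SPEC"])[good]
    print(f"{col} / EFFTIME_SPEC: median {np.median(ratio):.2f}, "
          f"16-84% {np.percentile(ratio, 16):.2f}-{np.percentile(ratio, 84):.2f}")
```
|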
Following recent discussion on slack @schlafly
This is a PR (in development) addressing some issues with the effective exposure times of the backup program.
At the moment it just removes the dust extinction correction when computing the tsnr2 for gpbbackup/backup tracers.
Another thing that potentially needs changing is the nominal brightness for the Poisson term:
https://github.com/desihub/desispec/blob/801464161cc6385ed8f1fe185f65bcda423f5b43/py/desispec/tsnr.py#L588C9-L589C65
Change of the template for the backup program?
And changes to desietc?
I would appreciate any suggestions on the easiest way to test the effect of changes to tsnr.py on individual exposures without running the full pipeline. That would help in understanding the effect of the changes.
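One possibility (an untested sketch; it assumes the usual desispec.io readers and the calc_tsnr2 entry point in desispec.tsnr keep their current signatures, and all file paths are placeholders) would be to read the calibrated products of a single frame and call the TSNR2 code directly, once with the current tsnr.py and once with the patched version:

```python
# Untested sketch: evaluate TSNR2 for one frame outside the full pipeline.
from desispec.io import (read_frame, read_fiberflat, read_sky,
                         read_flux_calibration)
from desispec.tsnr import calc_tsnr2

# Placeholder paths to the per-exposure products of a backup exposure.
frame     = read_frame("frame-r3-00123456.fits")
fiberflat = read_fiberflat("fiberflatnight-r3-20240911.fits")
skymodel  = read_sky("sky-r3-00123456.fits")
fluxcalib = read_flux_calibration("fluxcalib-r3-00123456.fits")

# Compare the returned TSNR2 values (e.g. the backup tracer) before and
# after editing tsnr.py.
results, alpha = calc_tsnr2(frame, fiberflat, skymodel, fluxcalib)
print("alpha =", alpha)
print("keys  =", sorted(results.keys()))
```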