Photon counting step function #260
base: main
Conversation
… fixed a minor error in combine.py; almost finished with unit tests for photon counting
so that I can leave the "OPTIONAL" out of the recipe for prescan subtraction, since we do ideally want the calibrated bias offset
Okay, all done with my little changes now.
Nice work! I added a bunch of small comments throughout. The two bigger comments are:
- Should we have a separate piece of code to produce a photon-counted dark which will save it and allow it to be tracked in the caldb? Also should we have a separate step function to do the dark subtraction in photon counting mode?
- Is the big unit test you wrote really an e2e test? If so, should we make it one, and write a smaller unit test?
```diff
@@ -166,7 +167,7 @@ def create_synthesized_master_dark_calib(detector_areas):
     # image area, including "shielded" rows and cols:
     imrows, imcols, imr0c0 = imaging_area_geom('SCI', detector_areas)
     prerows, precols, prer0c0 = unpack_geom('SCI', 'prescan', detector_areas)

     np.random.seed(4567)
```
generally, seeding should be done by the function that calls this function, not this function itself (by doing it here, it will always produce the same result and can't be overridden).
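A minimal sketch of this suggestion (the function name matches the diff, but the `rng` keyword and body here are illustrative, not the repo's actual API): accept an optional generator and let the caller decide whether to seed.

```python
import numpy as np

def create_synthesized_master_dark_calib(detector_areas, rng=None):
    """Sketch: randomness is injected by the caller via `rng`."""
    if rng is None:
        rng = np.random.default_rng()  # non-reproducible by default
    # placeholder for the real dark-frame synthesis
    return rng.standard_normal((3, 3))

# A test that needs reproducibility seeds at the call site:
a = create_synthesized_master_dark_calib(None, rng=np.random.default_rng(4567))
b = create_synthesized_master_dark_calib(None, rng=np.random.default_rng(4567))
```

Two calls with the same seed then produce identical output, while normal callers get fresh randomness.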
```python
        ill_mean (float): mean electron count value simulated in the illuminated frames
        dark_mean (float): mean electron count value simulated in the dark frames
    '''
    np.random.seed(1234)
```
same comment with seeding as above.
```python
datasets, vals = test_dataset[0].split_dataset(prihdr_keywords=['VISTYPE'])
if len(vals) != 2:
    raise PhotonCountException('There should only be 2 datasets, and one should have \'VISTYPE\' = \'DARK\'.')
dark_dataset = datasets[vals.index('DARK')]
```
should we be processing and producing a photon-counted dark separate for this step function? and then we feed in the photon-counted dark as a calibration file? I think this might be cleaner, and let us save the photon counted dark as an output.
I thought the only reason we made the synthesized and traditional master darks into calibration products was because they were CTC deliverables, so I didn't make the photon-counted dark into a separate calibration product because I don't think it is. Is this accurate, @mygouf ?
The photon-counting step function is basically the equivalent of mean-combine for an analog stack of frames. The photon-counted dark resulting from the stack of input darks is observation-specific, but if for some reason we do want it, I could add a return for it or add an ImageHDU for it in the output dataset?
```python
for dataset in [ill_dataset, dark_dataset]:
    # getting number of read noise standard deviations at which to set the
    # photon-counting threshold
    T_factor = detector_params.params['T_factor']
```
should we allow an argument in this function to allow the user to override this value, rather than needing to modify detector_params? This seems like a tunable parameter, rather than a property of the detector that we configured.
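One way to sketch this override (the function and class names below are hypothetical stand-ins, not the repo's real signatures): add an optional keyword that defaults to the calibrated value.

```python
def get_pc_mean(dataset, detector_params, T_factor=None):
    # Hypothetical keyword: fall back to the calibrated detector value
    # only when the caller does not supply an override.
    if T_factor is None:
        T_factor = detector_params.params['T_factor']
    return T_factor

class _FakeDetectorParams:
    # stand-in for the real DetectorParams calibration object
    params = {'T_factor': 5.0}

default_T = get_pc_mean(None, _FakeDetectorParams())
override_T = get_pc_mean(None, _FakeDetectorParams(), T_factor=8.0)
```

This keeps `detector_params` as the source of truth while letting a user tune the threshold per call.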
```python
thresh = T_factor*read_noise
if thresh < 0:
    raise PhotonCountException('thresh must be nonnegative')
if not isinstance(niter, (int, np.integer)) or niter < 1:
```
should we put this check at the start of the function, before we do any serious processing? I usually prefer that for organization sake to put the input error checking in the beginning.
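As a rough sketch of that organization (function name and body are illustrative only): do all input checks before any expensive work.

```python
import numpy as np

def photon_count(frames, thresh, niter=2):
    # Input validation first, before any serious processing.
    if not isinstance(niter, (int, np.integer)) or niter < 1:
        raise ValueError('niter must be an integer greater than 0')
    if thresh < 0:
        raise ValueError('thresh must be nonnegative')
    # ... the heavy photon-counting math would follow here ...
    return np.asarray(frames) > thresh

counted = photon_count([0.0, 2.0], thresh=1.0)
```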
```python
if not isinstance(niter, (int, np.integer)) or niter < 1:
    raise PhotonCountException('niter must be an integer greater than '
                               '0')
try: # if EM gain measured directly from frame TODO change hdr name if necessary
```
I prefer to use statements like `if 'EMGAIN_M' in dataset.frames[0].ext_hdr:`, as it's more direct and doesn't require try/except (which feels like overkill here).
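The membership test reads like this (here `ext_hdr` is a plain dict standing in for the frame's FITS extension header; keys follow the TODO-flagged names in the diff):

```python
# `ext_hdr` stands in for dataset.frames[0].ext_hdr.
ext_hdr = {'EMGAIN_C': 5000.0}

if 'EMGAIN_M' in ext_hdr:      # EM gain measured directly from the frame
    em_gain = ext_hdr['EMGAIN_M']
else:                          # fall back to the commanded EM gain
    em_gain = ext_hdr['EMGAIN_C']
```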
```python
if thresh >= em_gain:
    warnings.warn('thresh should be less than em_gain for effective '
                  'photon counting')
if np.average(dataset.all_data) > 0.1:
```
should we use np.nanmean in case there are nan's from cosmic rays?
Shouldn't be any NaNs at this point since the bad pixel step function is applied after this step function, but I can change it since it wouldn't hurt anything.
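The difference the reviewer is pointing at, in miniature:

```python
import numpy as np

data = np.array([1.0, np.nan, 3.0])
plain = np.average(data)   # a single NaN poisons the plain mean
robust = np.nanmean(data)  # nanmean ignores NaN entries
```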
```json
    "calibs" : {
        "DetectorParams" : "AUTOMATIC"
    }
},
```
According to the FDD, there should be a step where we also mark pixels as bad using the master bad pixel map. I don't think this is being accounted for elsewhere.
I saw that that step happens after the photon-counting step?
```python
up = mean_expected_up + pc_variance
low = mean_expected_low - pc_variance
errs.append(np.max([up - mean_expected, mean_expected - low], axis=0))
dq = (nframes == 0).astype(int)
```
Is there a reason specifically to use only the 0/1 bits of the DQ map, and not use the higher value bits? I don't seem to understand why we are erasing the reasons for bad DQ pixels. Can we just do an AND or some other logic to keep the history of which pixels are bad for what reason?
Essentially a mean-combine is performed on the frames, so the particular DQ flags become irrelevant (same process that is done with mean_combine).
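If preserving the per-pixel flag history were wanted, the reviewer's suggestion of combining DQ planes with bitwise logic could be sketched like this (toy data; the real DQ arrays and bit meanings are the repo's):

```python
import numpy as np

# Three frames' DQ planes; each bit flags a different bad-pixel reason.
dq_stack = np.array([[0, 2],
                     [1, 0],
                     [0, 4]], dtype=np.uint8)

combined = np.bitwise_or.reduce(dq_stack, axis=0)  # keeps every bit that was set
```

OR (not AND) is the operation that retains a flag if any frame set it.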
```python
detector_params = data.DetectorParams({})


def test_expected_results():
```
should this be an e2e test? It seems like it is testing the full use case like an e2e test. Another way to phrase the question: is there a different e2e test you were thinking of writing?
A unit test would test just the inputs and outputs of the step function (rather than going through the whole walker infrastructure).
I can do that!
Describe your changes
Added a step function for photon counting frames.
Type of change
Reference any relevant issues (don't forget the #)
#2
Checklist before requesting a review