Keck KCRM #1650
Conversation
Merge commits resolving conflicts in:
- pypeit/find_objects.py
- pypeit/utils.py
- CHANGES.rst
- pypeit/core/arc.py
- pypeit/core/wavecal/autoid.py
- pypeit/wavecalib.py
- pypeit/par/pypeitpar.py
- pypeit/spectrographs/keck_kcwi.py
this is great, thanks!
# Use the FWHM map determined from the arc lines to convert the science frame
# to have the same effective spectral resolution.
fwhm_map = self.wv_calib.build_fwhmimg(self.tilts, self.slits, initial=True, spat_flexure=self.spat_flexure_shift)
thismask = thismask & (fwhm_map != 0.0)
Can you please add some documentation to this routine explaining how the joint sky subtraction works? I don't follow the philosophy here. Why are you convolving the image to a single FWHM and then fitting the global_sky? It looks like this is only used to compute the scaleimg. A detailed description of the control flow of the routine is needed in the docstring.
Yes, you are exactly right. This is because once you have a model of the sky, all of the various deconvolution routines that I have attempted produced terrible results. The best was Wiener deconvolution. I think it works well for very small deviations from a uniform FWHM, but for the large deviations seen in KCWI/KCRM it is virtually impossible to get a good result. I'm currently at a loss to get the convolution/deconvolution to work.
For now, I'll add a note to the documentation to describe how this currently works. I think the only way forward is to have a single underlying "intrinsic" model that gets convolved with a grid of kernels and evaluated. But I fear this is far too computationally expensive. For KCWI/KCRM, I think the recommended approach is to take separate sky frames if you care about sky subtraction.
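For anyone following this thread, below is a minimal sketch of the approach being discussed (convolve the whole frame to a single common FWHM before fitting the joint sky). It is an illustration only, not the actual PypeIt routine; the inputs science and fwhm_map (2D arrays of shape (nspec, nspat), FWHM in pixels, zero meaning masked) are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter1d

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # convert FWHM to Gaussian sigma

def convolve_to_common_fwhm(science, fwhm_map):
    # Degrade every spatial column to the largest FWHM found anywhere in the map.
    good = fwhm_map > 0.0
    fwhm_max = np.max(fwhm_map[good])
    out = science.copy()
    for ispat in range(science.shape[1]):
        colgood = good[:, ispat]
        if not np.any(colgood):
            continue
        # Approximate this column by a single FWHM; in reality the FWHM also
        # varies along the spectral direction, which is what makes this hard.
        col_fwhm = np.median(fwhm_map[colgood, ispat])
        # Kernel width that takes the column from its own FWHM to the common one,
        # assuming Gaussian line profiles (FWHMs add in quadrature).
        dfwhm = np.sqrt(max(fwhm_max ** 2 - col_fwhm ** 2, 0.0))
        if dfwhm > 1e-3:
            out[:, ispat] = gaussian_filter1d(science[:, ispat],
                                              FWHM_TO_SIGMA * dfwhm, mode='nearest')
    return out

# Example usage with toy data:
# science = np.random.normal(size=(1000, 50))
# fwhm_map = np.full((1000, 50), 3.0); fwhm_map[:, 25:] = 5.0
# sci_conv = convolve_to_common_fwhm(science, fwhm_map)

The global sky would then be fit to the convolved frame, and (as noted above) the result is only used to compute the relative scaleimg rather than being deconvolved back to the native resolution.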
pypeit/flatfield.py
Outdated
Image data that will be used to estimate the spectral relative sensitivity
waveimg : `numpy.ndarray`_
    Wavelength image
slits : :class:`~pypeit.slittrace.SlitTraceSet`
Slits is an object. Instead pass in the boundaries, slit_left and slit_right, which are arrays. That is our convention in core.
Good point... Although this is not in core, it probably should be, and I've made the changes to pass in the arrays instead of the slits object.
Okay my bad. Can you move it to core and wrap here?
Yep - I've already moved it to core :-)
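(For readers skimming this thread, the "arrays in core" convention being referenced looks roughly like the sketch below; the names and signatures are illustrative assumptions, not the actual PypeIt code.)

import numpy as np

def spec_illum_core(rawimg, waveimg, slit_left, slit_right):
    # Core routine: accepts only plain arrays (slit edge traces, not a
    # SlitTraceSet object), so it is easy to reuse and test in isolation.
    # ... compute the relative spectral sensitivity from the arrays ...
    return np.ones_like(rawimg)

class FlatFieldLike:
    # Stand-in for the calibration class; in PypeIt the left/right slit edge
    # arrays would be extracted from the SlitTraceSet object, but here they are
    # stored directly to keep the sketch self-contained.
    def __init__(self, rawimg, waveimg, slit_left, slit_right):
        self.rawimg, self.waveimg = rawimg, waveimg
        self.slit_left, self.slit_right = slit_left, slit_right

    def spectral_illumination(self):
        # Thin wrapper: unpack attributes and delegate to the core function.
        return spec_illum_core(self.rawimg, self.waveimg,
                               self.slit_left, self.slit_right)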
pypeit/utils.py
Outdated
harr = np.arange(-hwid, +hwid + 1)
fullsize = nspec + kernsize - 1
fsize = 2 ** int(np.ceil(np.log2(fullsize)))  # Use this size for a more efficient computation
conv = np.fft.fft(img, fsize, axis=0)
Should you be using this function (https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html) instead of doing it yourself? I suspect it is faster.
I benchmarked this, and you're right... so I've made this change.
from scipy import signal
import numpy as np

def fft_orig(img, kernel):
    # Original hand-rolled FFT convolution along the spectral axis
    nspec = img.shape[0]
    kernsize = kernel.size
    hwid = (kernsize - 1) // 2
    harr = np.arange(-hwid, +hwid + 1)  # unused here; kept from the original routine
    fullsize = nspec + kernsize - 1
    fsize = 2 ** int(np.ceil(np.log2(fullsize)))  # power-of-two size for a more efficient FFT
    conv = np.fft.fft(img, fsize, axis=0)
    conv *= np.fft.fft(kernel / np.sum(kernel), fsize)[:, None]
    img_conv = np.fft.ifft(conv, axis=0).real[hwid:hwid + nspec, :]
    return img_conv

def fft_scipy(img, kernel):
    # Equivalent convolution using scipy's fftconvolve along axis 0
    return signal.fftconvolve(img, kernel, mode='same', axes=0)

img = np.random.normal(0.0, 1.0, (511, 511))
kernwid = 3
kernsize = 2 * int(5 * kernwid + 0.5) + 1
midp = (kernsize - 1) // 2
xkern = np.arange(kernsize, dtype=int) - midp  # symmetric grid of kernsize points
kern = np.exp(-0.5 * (xkern / kernwid) ** 2)
kern = kern / np.sum(kern)
tmp1 = fft_orig(img, kern)
kernrsh = kern.reshape(kern.size, 1)  # fftconvolve needs the kernel with the same ndim as img
tmp2 = fft_scipy(img, kernrsh)
print(np.allclose(tmp1, tmp2))
# %timeit fft_orig(img, kern)
# %timeit fft_scipy(img, kernrsh)
Benchmark results (mean ± std. dev. of 7 runs, 100 loops each):
511x511: fft_orig 11.4 ms ± 475 µs, fft_scipy 3.76 ms ± 212 µs
512x512: fft_orig 11.5 ms ± 736 µs, fft_scipy 4.11 ms ± 225 µs
513x513: fft_orig 8.79 ms ± 322 µs, fft_scipy 3.33 ms ± 48 µs
Thanks for doing this!
Hi @rcooke-ast, relatively minor comments and changes requested. Thanks for this PR.
Addressed PR comments
This PR adds minimal support for Keck/KCRM data reduction, along with some additional features in the sky subtraction to account for the non-uniform FWHM map across the KCWI+KCRM field of view.
A companion PR to the dev-suite will follow once I get approval to include some frames in the dev-suite. Here are the wavelength solutions currently implemented (a brief usage sketch follows below):
RM1: (wavelength solution plot)
RM2: (wavelength solution plot)
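As a rough usage sketch of the new support, the spectrograph can be loaded like any other through PypeIt's standard factory function. The registered name is assumed here to be 'keck_kcrm'; check the PypeIt documentation for the exact name.

from pypeit.spectrographs.util import load_spectrograph

# 'keck_kcrm' is an assumed name for the spectrograph added by this PR
spectrograph = load_spectrograph('keck_kcrm')
print(spectrograph.name)                  # registered spectrograph name
print(spectrograph.default_pypeit_par())  # default reduction parameters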