Combined spectra.
In SDSS we used "spPlate" files, named by the plugplate and MJD of observation, but this is not suitable
for PFS, where we:
1. Will split observations of the same object over multiple nights
2. Will potentially reconfigure the PFI between observations.
I don't think it makes sense to put multiple spectra together in one file based on sky coordinates, as we
may go back and add more observations later, so I think we're forced to use a separate file for every
object. That's a lot of files, but maybe not too bad? We could use a directory structure based on HSC's
(tract, patch) -- note that these are well defined even if we are not using HSC data to target. An
alternative would be to use a healpix or HTM id.
Because we may later obtain more data on a given object, decide that some data we have already taken are
bad, or process a number of subsets of the available data, there may be more than one set of visits used
to produce a pfsObject file for a given object. We therefore include both the number of visits (nVisit)
and a SHA-1 hash of the visits, pfsVisitHash. We use both because nVisit alone may be ambiguous, while
pfsVisitHash isn't human-friendly; in particular it doesn't sort in a helpful way. It seems improbable
that we will ever have more than 1000 visits, but as pfsVisitHash is unambiguous it seemed safer to allow
for larger values of nVisit while recording them only modulo 1000.
"pfsObject-%05d-%05d-%s-%016x-%03d-0x%016x.fits"
% (catId, tract, patch, objId, nVisit % 1000, pfsVisitHash)
The path would be
catId/tract/patch/pfsObject-*.fits
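The template above can be filled in with a short helper. The hash construction below (SHA-1 over the
sorted visit numbers, truncated to 64 bits) is a sketch of one plausible scheme: the text only specifies
"a SHA-1 hash of the visits", so calculate_pfs_visit_hash is an illustrative assumption, not the official
implementation.

```python
import hashlib

def calculate_pfs_visit_hash(visits):
    # Sketch: SHA-1 over the sorted visit numbers, keeping 64 bits.
    # The official scheme may differ; only "a SHA-1 hash of the visits"
    # is specified above. Sorting makes the hash independent of visit order.
    h = hashlib.sha1()
    for visit in sorted(visits):
        h.update(str(visit).encode())
    return int.from_bytes(h.digest()[:8], "big")

def pfs_object_filename(catId, tract, patch, objId, visits):
    # nVisit is recorded modulo 1000; pfsVisitHash disambiguates beyond that.
    return ("pfsObject-%05d-%05d-%s-%016x-%03d-0x%016x.fits"
            % (catId, tract, patch, objId,
               len(visits) % 1000, calculate_pfs_visit_hash(visits)))
```

For example, pfs_object_filename(1, 8283, "1,1", 0x2468ace, [100, 101, 105]) yields a name that sorts
usefully by catId, tract, patch, and objId.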
The file will have several HDUs:
HDU #0 PDU
HDU #1 FLUX Flux in units of nJy [FLOAT] NROW
HDU #2 MASK Pixel mask [32-bit INT] NROW
HDU #3 TARGET Binary table [FITS BINARY TABLE] NFILTER
Columns for:
filterName [STRING]
fiberFlux [FLOAT]
HDU #4 SKY Sky flux in same units as FLUX [FLOAT] NROW
HDU #5 COVAR Near-diagonal part of FLUX's covariance [FLOAT] NROW*3
HDU #6 COVAR2 Low-resolution non-sparse estimate of the covariance [FLOAT] NCOARSE*NCOARSE
HDU #7 OBSERVATIONS Binary table [FITS BINARY TABLE] NOBS
Columns for:
visit [32-bit INT]
arm [STRING]
spectrograph [32-bit INT]
pfsDesignId [64-bit INT]
fiberId [32-bit INT]
nominal PFI position (millimeters) [FLOAT]*2
actual PFI position (millimeters) [FLOAT]*2
HDU #8 FLUXTABLE Binary table [FITS BINARY TABLE] NOBS*NROW
Columns for:
wavelength in units of nm (vacuum) [64-bit FLOAT]
intensity in units of nJy [FLOAT]
intensity error in same units as intensity [FLOAT]
mask [32-bit INT]
HDU #9 NOTES Reduction notes [FITS BINARY TABLE] NNOTES
The wavelengths are specified via the WCS cards in the header (e.g. CRPIX1,
CRVAL1) for the FLUX, MASK, SKY, COVAR extensions and explicitly in the table
for the FLUXTABLE. We chose these two representations for the data due to the
difficulty in resampling marginally sampled data onto a regular grid, while
recognising the convenience of such a grid when rebinning, performing PCAs, or
stacking spectra. For highest precision the data in the FLUXTABLE is likely to
be used.
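For the regularly sampled extensions, the wavelength of each pixel follows directly from the WCS cards.
A minimal sketch, assuming a simple linear WCS described by CRVAL1/CRPIX1/CDELT1 (and remembering that
FITS pixels are 1-indexed); the units are whatever CUNIT1 specifies, nm assumed here:

```python
import numpy as np

def wavelength_grid(crval1, crpix1, cdelt1, naxis1):
    # FITS convention: lambda(p) = CRVAL1 + (p - CRPIX1) * CDELT1,
    # with pixel index p running from 1 to NAXIS1 inclusive.
    pix = np.arange(1, naxis1 + 1)
    return crval1 + (pix - crpix1) * cdelt1
```

For example, wavelength_grid(350.0, 1.0, 0.08, 4096) gives a grid starting at 350 nm with 0.08 nm
(0.8 A) steps; the numerical values here are illustrative, not taken from a real header.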
The TARGET HDU must contain at least the keywords
catId Catalog identifier INT
tract Tract identifier INT
patch Patch identifier STRING
objId Object identifier INT
ra Right Ascension (degrees) DOUBLE
dec Declination (degrees) DOUBLE
targetType Target type enum INT
(N.b. the keywords are case-insensitive). Other HDUs should specify INHERIT=T.
See pfsArm for the definition of the COVAR data.
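As a sketch of how the near-diagonal COVAR data can be used: assuming the convention that row k of the
(3, NROW) array holds the covariance between pixels i and i+k (an assumption here; pfsArm is the
authoritative reference for the layout), the band can be expanded into a full matrix:

```python
import numpy as np

def expand_covar(covar):
    # covar has shape (3, NROW); assumed convention: covar[k, i] is the
    # covariance between pixels i and i+k (trailing entries of row k unused).
    n = covar.shape[1]
    full = np.zeros((n, n))
    for k in range(covar.shape[0]):
        d = covar[k, :n - k]
        full += np.diag(d, k)      # k-th superdiagonal
        if k:
            full += np.diag(d, -k)  # mirror for symmetry
    return full
```

This is only for illustration; in practice the band form is kept precisely to avoid materialising the
full NROW x NROW matrix.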
What resolution should we use for HDU #1? The instrument has a dispersion per pixel which is roughly constant
(in the blue arm Jim-sensei calculates that it varies from 0.70 to 0.65 (going red) A/pix; in the red, 0.88 to
0.82, and in the IR, 0.84 to 0.77). We propose that we sample at 0.8 A/pixel.
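At 0.8 A/pixel (0.08 nm), a single grid spanning the instrument's full coverage (roughly 380-1260 nm;
the exact limits used here are an assumption for illustration) needs about 11,000 rows:

```python
# Approximate full wavelength coverage in nm (illustrative assumption).
lam_min, lam_max = 380.0, 1260.0
dlam = 0.08  # proposed 0.8 A/pixel sampling, in nm
nrow = int(round((lam_max - lam_min) / dlam)) + 1  # inclusive endpoints
print(nrow)
```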
The second covariance table (COVAR2) is the full covariance at low spectral resolution, maybe 10x10. It's
really only 0.5*NCOARSE*(NCOARSE + 1) numbers, but it doesn't seem worth the trouble to save a few bytes.
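The saving being passed up is indeed small: for NCOARSE = 10, the full square stores 100 values against
55 independent ones in the symmetric matrix.

```python
NCOARSE = 10
full = NCOARSE * NCOARSE               # values stored as written
unique = NCOARSE * (NCOARSE + 1) // 2  # independent values in a symmetric matrix
print(full, unique)
```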
This covariance is needed to model the spectrophotometric errors.
The reduction notes (NOTES) HDU is a FITS table HDU with a single row and a variety of columns.
The values record operations performed and measurements made during reduction for the spectrum.
Note that we don't keep the SDSS "AND" and "OR" masks -- if needs be we could set two mask bits to capture
the same information, but in practice SDSS's OR masks were not very useful.
For data taken with the medium resolution spectrograph, HDU #1 is expected to be at the resolution of
the medium arm, and to omit the data from the blue and IR arms.
The loader should therefore read wavelength, flux, and flux error from the 8th extension (FLUXTABLE),
whereas the current loader reads them from the 2nd extension.
I will make a PR to fix the issue.
pllim changed the title from "Subaur-pfsObject data loader fails with the latest datamodel" to
"Subaru-pfsObject data loader fails with the latest datamodel" on Dec 20, 2024.
The Subaru-pfsObject data loader fails with the latest datamodel. This is due to significant updates to
the datamodel (https://github.com/Subaru-PFS/datamodel/blob/244bdeacf0e062e13b75d8d541e962b52c22bffb/datamodel.txt#L866).