Flux and telluric correction. #24

Merged 20 commits on Oct 30, 2023
3 changes: 3 additions & 0 deletions .dockerignore
Original file line number Diff line number Diff line change
@@ -1,2 +1,5 @@
 .tox
 banzai_floyds.egg-info
+build
+tmp
+test_data
27 changes: 11 additions & 16 deletions .github/workflows/unit_tests.yml
@@ -26,14 +26,9 @@ jobs:
       include:
         - name: Code style checks
           os: ubuntu-latest
-          python: '3.8'
+          python: '3.9'
           toxenv: codestyle
 
-        - name: Python 3.7 with minimal dependencies
-          os: ubuntu-latest
-          python: '3.7'
-          toxenv: py37-test
-
         - name: Python 3.9 with minimal dependencies
           os: ubuntu-latest
           python: '3.9'
@@ -44,24 +39,24 @@
           python: '3.10'
           toxenv: py310-test
 
-        - name: Python 3.8 with all optional dependencies and coverage checking
+        - name: Python 3.9 with all optional dependencies and coverage checking
           os: ubuntu-latest
-          python: '3.8'
-          toxenv: py38-test-alldeps-cov
+          python: '3.9'
+          toxenv: py39-test-alldeps-cov
 
-        - name: OS X - Python 3.8 with all optional dependencies
+        - name: OS X - Python 3.9 with all optional dependencies
           os: macos-latest
-          python: '3.8'
-          toxenv: py38-test-alldeps
+          python: '3.9'
+          toxenv: py39-test-alldeps
 
-        - name: Windows - Python 3.8 with all optional dependencies
+        - name: Windows - Python 3.9 with all optional dependencies
           os: windows-latest
-          python: '3.8'
-          toxenv: py38-test-alldeps
+          python: '3.9'
+          toxenv: py39-test-alldeps
 
         - name: Test building of Sphinx docs
           os: ubuntu-latest
-          python: '3.8'
+          python: '3.9'
           toxenv: build_docs
 
     steps:
2 changes: 1 addition & 1 deletion Dockerfile
@@ -1,4 +1,4 @@
-FROM docker.lco.global/banzai:1.10.1
+FROM docker.lco.global/banzai:1.11.0
 
 USER root
 
184 changes: 0 additions & 184 deletions Jenkinsfile

This file was deleted.

5 changes: 5 additions & 0 deletions banzai_floyds/data/orders/coj-order-1.reg
@@ -0,0 +1,5 @@
# Region file format: DS9 version 4.1
global color=green dashlist=8 3 width=1 font="helvetica 10 normal roman" select=1 highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 include=1 source=1
physical
polygon(662.5,491.16667,717.5,477,783.33333,463.66667,875.83333,448.66667,1007.5,435.33333,1126.6667,430.33333,1197.5,430.33333,1285,433.66667,1384.1667,438.66667,1478.75,445.86111,1561,454.16667,1631,461.16667,1687,468.16667,1778.0556,481.97222,1868.3333,496.55556,1947,510.16667,2048,511.16667,2048,432.16667,1899,405.16667,1823,393.16667,1772,385.16667,1738.4722,379.88889,1735.6944,381.27778,1734.3056,379.88889,1701.6667,376.41667,1698.1944,374.33333,1657.9167,370.86111,1614.8611,366,1579.4444,361.83333,1519.7222,355.58333,1464.8611,350.72222,1422.2222,347.22222,1400.6944,346.52778,1372.9167,344.44444,1333.3333,340.97222,1290.9722,339.58333,1250.6944,338.88889,1206.25,338.88889,1145.1389,339.58333,1105.5556,340.27778,1039.5833,343.75,939.58333,350.69444,886.80556,357.63889,856.94444,362.5,827.77778,366.66667,791.66667,373.61111,739.58333,384.02778,693.05556,394.44444,668.75,400.69444,643.05556,407.63889,552.77778,434.02778,470.11111,468.08333,399.97222,511.83333,584,511.16667,603.33333,507)
polygon(1.4352,189.16,107,171,212,154,318,142,442,137,735,154,875,166,1020,182,1338,228,1745,301,1749,206,1444,150,1335,133,1124,102,991,84,920,77,684,55,524,50,412,49,306,54,165,66,75,83,1,103)
4 changes: 4 additions & 0 deletions banzai_floyds/data/orders/coj-order-2.reg
@@ -0,0 +1,4 @@
# Region file format: DS9 version 4.1
global color=green dashlist=8 3 width=1 font="helvetica 10 normal roman" select=1 highlite=1 dash=0 fixed=0 edit=1 move=1 delete=1 include=1 source=1
physical
polygon(662.5,491.16667,717.5,477,783.33333,463.66667,875.83333,448.66667,1007.5,435.33333,1126.6667,430.33333,1197.5,430.33333,1285,433.66667,1384.1667,438.66667,1478.75,445.86111,1561,454.16667,1631,461.16667,1687,468.16667,1778.0556,481.97222,1868.3333,496.55556,1947,510.16667,2048,511.16667,2048,432.16667,1899,405.16667,1823,393.16667,1772,385.16667,1738.4722,379.88889,1735.6944,381.27778,1734.3056,379.88889,1701.6667,376.41667,1698.1944,374.33333,1657.9167,370.86111,1614.8611,366,1579.4444,361.83333,1519.7222,355.58333,1464.8611,350.72222,1422.2222,347.22222,1400.6944,346.52778,1372.9167,344.44444,1333.3333,340.97222,1290.9722,339.58333,1250.6944,338.88889,1206.25,338.88889,1145.1389,339.58333,1105.5556,340.27778,1039.5833,343.75,939.58333,350.69444,886.80556,357.63889,856.94444,362.5,827.77778,366.66667,791.66667,373.61111,739.58333,384.02778,693.05556,394.44444,668.75,400.69444,643.05556,407.63889,552.77778,434.02778,470.11111,468.08333,399.97222,511.83333,584,511.16667,603.33333,507)
15 changes: 15 additions & 0 deletions banzai_floyds/data/standards/README
@@ -0,0 +1,15 @@
Relevant links:
https://www.eso.org/sci/observing/tools/standards/spectra/stanlis.html
https://ftp.eso.org/pub/usg/standards/ctiostan/
https://ftp.eso.org/pub/stecf/standards/okestan/

TODO: More detail is needed about how this data is actually accessed and used, specifically WHICH standards are required.

Convert flux units
TODO: What are the final desired units?

Save the table as a FITS file in banzai_floyds/data
TODO: We should possibly write a small piece of utility code that handles this and produces the correct final format for the FITS file

Add RA/DEC to the FITS header in decimal degrees
TODO: Include this step in the utility code above.
Binary file added banzai_floyds/data/standards/bdp28d4211.fits
Binary file not shown.
Binary file added banzai_floyds/data/standards/feige110.fits
Binary file not shown.
62 changes: 62 additions & 0 deletions banzai_floyds/dbs.py
@@ -0,0 +1,62 @@
from banzai.dbs import Base
from sqlalchemy import Column, Integer, String, Float
from banzai.dbs import get_session
from astropy.coordinates import SkyCoord
from astropy import units
from banzai.utils.fits_utils import open_fits_file
from astropy.table import Table
import pkg_resources
from glob import glob
import os
from astropy.io import fits


def get_standard(ra, dec, db_address, offset_threshold=5):
"""
Check if a position is in the table of flux standards

ra: float
RA in decimal degrees
dec: float
Declination in decimal degrees
db_address: str
Database address in SQLAlchemy format
offset_threshold: float
Match radius in arcseconds
"""
found_standard = None
test_coordinate = SkyCoord(ra, dec, unit=(units.deg, units.deg))
with get_session(db_address) as db_session:
standards = db_session.query(FluxStandard).all()
for standard in standards:
standard_coordinate = SkyCoord(standard.ra, standard.dec, unit=(units.deg, units.deg))
if standard_coordinate.separation(test_coordinate) < (offset_threshold * units.arcsec):
found_standard = standard
if found_standard is not None:
found_standard = open_fits_file({'path': found_standard.filepath, 'frameid': found_standard.frameid,
'filename': found_standard.filename})

return Table(found_standard)


class FluxStandard(Base):
__tablename__ = 'fluxstandards'
id = Column(Integer, primary_key=True, autoincrement=True)
filename = Column(String(100), unique=True)
filepath = Column(String(150))
frameid = Column(Integer, nullable=True)
ra = Column(Float)
dec = Column(Float)


def ingest_standards(db_address):
standard_files = glob(pkg_resources.resource_filename('banzai_floyds.tests', 'data/standards/*.fits'))
for standard_file in standard_files:
standard_hdu = fits.open(standard_file)
standard_record = FluxStandard(filename=os.path.basename(standard_file),
filepath=os.path.dirname(standard_file),
ra=standard_hdu[0].header['RA'],
dec=standard_hdu[0].header['DEC'])
with get_session(db_address) as db_session:
db_session.add(standard_record)
db_session.commit()
12 changes: 8 additions & 4 deletions banzai_floyds/extract.py
@@ -1,7 +1,7 @@
from banzai.stages import Stage
import numpy as np
from astropy.table import Table, vstack
-from banzai_floyds.matched_filter import maximize_match_filter
+from banzai_floyds.matched_filter import optimize_match_filter
from numpy.polynomial.legendre import Legendre
from banzai_floyds.utils.fitting_utils import gauss, fwhm_to_sigma, Legendre2d

@@ -71,7 +71,7 @@ def fit_profile(data, profile_width=4):
for data_to_fit in data.groups:
# Pass a match filter (with correct s/n scaling) with a gaussian with a default width
initial_guess = (data_to_fit['y_order'][np.argmax(data_to_fit['data'])], 0.05)
-        best_fit_center, _ = maximize_match_filter(initial_guess, data_to_fit['data'], data_to_fit['uncertainty'],
+        best_fit_center, _ = optimize_match_filter(initial_guess, data_to_fit['data'], data_to_fit['uncertainty'],
profile_gauss_fixed_width, data_to_fit['y_order'],
args=(fwhm_to_sigma(profile_width),))
# If the peak pixel of the match filter is > 2 times the median (or something like that) keep the point
@@ -107,12 +107,14 @@ def fit_profile_width(data, profile_fits, poly_order=3, background_poly_order=2,
if peak_snr < 2.0 * median_snr:
continue

# TODO: Only fit the profile width where it is much larger than the background value,
# otherwise use a heuristic width
# Pass a match filter (with correct s/n scaling) with a gaussian with a default width
initial_coeffs = np.zeros(background_poly_order + 1)
initial_coeffs[0] = np.median(data_to_fit['data']) / data_to_fit['data'][peak]

initial_guess = fwhm_to_sigma(default_width), *initial_coeffs
-        best_fit_sigma, *_ = maximize_match_filter(initial_guess, data_to_fit['data'],
+        best_fit_sigma, *_ = optimize_match_filter(initial_guess, data_to_fit['data'],
data_to_fit['uncertainty'],
background_fixed_profile_center,
data_to_fit['y_order'],
@@ -140,7 +142,9 @@ def fit_background(data, profile_centers, profile_widths, x_poly_order=2, y_poly
# Pass a match filter (with correct s/n scaling) with a gaussian with a default width
initial_coeffs = np.zeros((x_poly_order + 1) + y_poly_order)
initial_coeffs[0] = np.median(data_to_fit['data']) / data_to_fit['data'][peak]
-        best_fit_coeffs = maximize_match_filter(initial_coeffs, data_to_fit['data'],
+        # TODO: Fit the background with a totally fixed profile, and no need to iterate
+        # since our filter is linear
+        best_fit_coeffs = optimize_match_filter(initial_coeffs, data_to_fit['data'],
data_to_fit['uncertainty'],
background_fixed_profile,
(data_to_fit['wavelength'], data_to_fit['y_order']),
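`optimize_match_filter` itself is not shown in this diff. As a rough sketch of the technique these extraction stages use — optimizing a noise-weighted matched-filter response over profile parameters — one might write something like the following; the function names and the exact signal-to-noise metric here are assumptions, not the package's actual implementation:

```python
import numpy as np
from scipy.optimize import minimize


def gauss(x, center, sigma):
    """Unnormalized Gaussian profile."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)


def match_filter_metric(theta, data, uncertainty, x, sigma):
    """Noise-weighted matched-filter response for a Gaussian of fixed width
    centered at theta[0] (an assumed S/N-like metric, for illustration)."""
    weights = gauss(x, theta[0], sigma)
    return np.sum(weights * data / uncertainty ** 2) / np.sqrt(np.sum((weights / uncertainty) ** 2))


def fit_center(initial_center, data, uncertainty, x, sigma):
    """Find the profile center that maximizes the filter response
    by minimizing its negative."""
    result = minimize(lambda theta: -match_filter_metric(theta, data, uncertainty, x, sigma),
                      [initial_center], method='Nelder-Mead')
    return result.x[0]
```

The rename in this PR from `maximize_match_filter` to `optimize_match_filter` fits this framing: the routine is a generic optimizer over the filter parameters rather than strictly a maximizer.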