Utthishastro restructure feature extractor files master #166

Merged
4 changes: 4 additions & 0 deletions .readthedocs.yaml
@@ -7,3 +7,7 @@ build:

sphinx:
configuration: docs/conf.py

python:
install:
- requirements: "requirements.txt"
Binary file removed docs/_build/html/_images/SN848233.png
Binary file removed docs/_build/html/_images/active_learning_loop.png
Binary file removed docs/_build/html/_images/canonical.png
Binary file removed docs/_build/html/_images/diag.png
Binary file removed docs/_build/html/_images/time_domain.png
4 changes: 3 additions & 1 deletion docs/conf.py
@@ -30,9 +30,11 @@

master_doc = 'index'
extensions = ['sphinx.ext.autodoc',
'sphinx_rtd_theme',
'sphinx.ext.autosummary',
'sphinx.ext.mathjax',
'sphinx.ext.napoleon']
'sphinx.ext.napoleon',
'sphinx_rtd_theme']
#'sphinx_automodapi.smart_resolver',
#'sphinx_automodapi.automodapi']

13 changes: 7 additions & 6 deletions docs/index.rst
@@ -16,7 +16,7 @@ The code has been modified for the task of enabling photometric supernova cosmology
Getting started
===============

This code was developed for ``Python3`` and was not tested on Windows.

We recommend that you work within a `virtual environment <https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/>`_.

@@ -32,15 +32,15 @@ Navigate to a ``working_directory`` where you will store the new virtual environment

>>> python3.10 -m venv resspect

.. hint:: Make sure you deactivate any ``conda`` environment you might have running before moving forward.

Once the environment is set up you can activate it:

.. code-block:: bash

>>> source <working_directory>/bin/activate

You should see a ``(resspect)`` flag at the far left of the terminal command line.

Next, clone this repository in another chosen location:

@@ -118,10 +118,11 @@ Details of the tools available to evaluate different steps on feature extraction

Alternatively, you can also perform the full light curve fit for the entire sample from the command line.

If you are only interested in testing your installation, you should work with the SNPCC data:

.. code-block:: bash

>>> fit_dataset.py -s SNPCC -dd <path_to_data_dir> -o <output_file>

Once the data has been processed, you can apply the full Active Learning loop according to your needs.
A detailed description of how to use this tool is provided in the :ref:`Learning Loop page <learnloop>`.
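The loop alternates between training a classifier, querying the most informative object for spectroscopic follow-up, and retraining. As a purely conceptual sketch of the query step (uncertainty sampling; this is an illustration, not RESSPECT's actual API), in pure Python:

```python
# Illustrative uncertainty-sampling step, the core idea behind an
# active-learning loop. Conceptual sketch only, not RESSPECT's code.

def query_most_uncertain(probabilities):
    """Return the index of the object whose estimated probability of being
    a SN Ia is closest to 0.5, i.e. the classifier's least certain call."""
    return min(range(len(probabilities)),
               key=lambda i: abs(probabilities[i] - 0.5))

# toy probabilities for four photometric objects
probs = [0.95, 0.48, 0.10, 0.70]
print(query_most_uncertain(probs))  # prints 1: the 0.48 object
```

The queried object would then be sent for spectroscopic labeling and added to the training sample before the next iteration.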
@@ -151,7 +152,7 @@ Acknowledgements

This work is part of the Recommendation System for Spectroscopic Followup (RESSPECT) project, governed by an inter-collaboration agreement signed between the `Cosmostatistics Initiative (COIN) <https://cosmostatistics-initiative.org/>`_ and the `LSST Dark Energy Science Collaboration (DESC) <https://lsstdesc.org/>`_.

The `COsmostatistics INitiative (COIN) <https://cosmostatistics-initiative.org>`_ is an international network of researchers whose goal is to foster interdisciplinarity inspired by Astronomy.

COIN received financial support from `CNRS <http://www.cnrs.fr/>`_ for the development of this project, as part of its MOMENTUM programme over the 2018-2020 period, under the project *Active Learning for Large Scale Sky Surveys*.

2 changes: 1 addition & 1 deletion docs/plotting.rst
@@ -11,7 +11,7 @@ evolution of the metrics:
- Purity: fraction of correct Ia classifications;
- Figure of merit: efficiency x purity with a penalty factor of 3 for false positives (contamination).
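As a sketch of how these quantities combine (using an SNPCC-style figure of merit with penalty factor 3; toy implementation for illustration, not RESSPECT's code):

```python
# Toy implementations of the diagnostic metrics listed above.
# tp: true Ia correctly classified, fp: non-Ia classified as Ia,
# fn: true Ia that were missed. Illustrative only.

def efficiency(tp, fn):
    """Fraction of true Ia that were correctly classified."""
    return tp / (tp + fn)

def purity(tp, fp):
    """Fraction of Ia classifications that are correct."""
    return tp / (tp + fp)

def figure_of_merit(tp, fp, fn, penalty=3.0):
    """Efficiency times a purity-like term penalizing false positives."""
    return efficiency(tp, fn) * tp / (tp + penalty * fp)

print(round(figure_of_merit(tp=80, fp=10, fn=20), 3))  # prints 0.582
```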

The class `Canvas()` enables you to do it using:

.. code-block:: python
:linenos:
58 changes: 29 additions & 29 deletions docs/pre_processing.rst
@@ -11,9 +11,9 @@ as input to the learning algorithm.

Before starting any analysis, you need to choose a feature extraction method; all light curves will then be handled by this method. In the examples below we use the Bazin feature extraction method (`Bazin et al., 2009 <https://arxiv.org/abs/0904.1066>`_).
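The Bazin function itself is a simple parametric form, f(t) = A exp(-(t - t0)/tau_fall) / (1 + exp(-(t - t0)/tau_rise)) + B, fitted independently in each filter, and the fitted parameters become the features. A minimal stand-alone sketch (the parameter values below are made up for illustration):

```python
import math

# Bazin et al. (2009) parametric light-curve model; the fitted
# parameters per band serve as features. Stand-alone sketch only.

def bazin(t, a, b, t0, tau_fall, tau_rise):
    """Evaluate the Bazin flux model at time t."""
    return a * math.exp(-(t - t0) / tau_fall) \
           / (1.0 + math.exp(-(t - t0) / tau_rise)) + b

# at t = t0 the model equals a/2 + b
print(bazin(0.0, a=100.0, b=5.0, t0=0.0, tau_fall=30.0, tau_rise=5.0))  # prints 55.0
```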

Load 1 light curve:
-------------------


For SNPCC:
^^^^^^^^^^
@@ -34,8 +34,8 @@ You can load this data using:

>>> lc = BazinFeatureExtractor() # create light curve instance
>>> lc.load_snpcc_lc(path_to_lc) # read data

This allows you to visually inspect the content of the light curve:

.. code-block:: python
:linenos:
@@ -51,9 +51,9 @@ This allows you to visually inspect the content of the light curve:


Fit 1 light curve:
------------------

In order to perform feature extraction in one specific filter, you can do:

.. code-block:: python
:linenos:
@@ -94,11 +94,12 @@ This can be done in flux as well as in magnitude:

>>> lc.plot_fit(save=False, show=True, unit='mag')


.. figure:: images/SN729076_mag.png
:align: center
:height: 480 px
:width: 640 px
:alt: Bazin fit to light curve. This is an example from SNPCC data.

Example of a light curve from SNPCC data.

@@ -112,20 +113,20 @@ Before deploying large batches for pre-processing, you might want to visualize

>>> # define max MJD for this light curve
>>> max_mjd = max(lc.photometry['mjd']) - min(lc.photometry['mjd'])
>>> lc.plot_fit(save=False, show=True, extrapolate=True,
time_flux_pred=[max_mjd+3, max_mjd+5, max_mjd+10])


.. figure:: images/SN729076_flux_extrap.png
:align: center
:height: 480 px
:width: 640 px
:alt: Bazin fit to light curve. This is an example from SNPCC data.

Example of an extrapolated light curve from SNPCC data.


For PLAsTiCC:
^^^^^^^^^^^^^

@@ -138,7 +139,7 @@ Reading only 1 light curve from PLAsTiCC requires an object identifier. This can

>>> path_to_metadata = '~/plasticc_train_metadata.csv'
>>> path_to_lightcurves = '~/plasticc_train_lightcurves.csv.gz'

# read metadata for the entire sample
>>> metadata = pd.read_csv(path_to_metadata)

@@ -151,7 +152,7 @@
'libid_cadence', 'tflux_u', 'tflux_g', 'tflux_r', 'tflux_i', 'tflux_z',
'tflux_y'],
dtype='object')

# choose 1 object
>>> snid = metadata['object_id'].values[0]

@@ -179,7 +180,7 @@ For SNPCC:
>>> feature_extractor = 'bazin'

>>> fit_snpcc(path_to_data_dir=path_to_data_dir, features_file=features_file)


For PLAsTiCC:
^^^^^^^^^^^^^
@@ -189,14 +190,14 @@ For PLAsTiCC:

>>> from resspect import fit_plasticc

>>> path_photo_file = '~/plasticc_train_lightcurves.csv'
>>> path_header_file = '~/plasticc_train_metadata.csv.gz'
>>> output_file = 'results/PLAsTiCC_Bazin_train.dat'
>>> feature_extractor = 'bazin'

>>> sample = 'train'

>>> fit_plasticc(path_photo_file=path_photo_file,
path_header_file=path_header_file,
output_file=output_file,
feature_extractor=feature_extractor,
Expand All @@ -207,12 +208,11 @@ The same result can be achieved using the command line:

.. code-block:: bash
:linenos:

# for SNPCC
>>> fit_dataset -s SNPCC -dd <path_to_data_dir> -o <output_file>

# for PLAsTiCC
>>> fit_dataset -s <dataset_name> -p <path_to_photo_file>
-hd <path_to_header_file> -sp <sample> -o <output_file>

