- # Bokeh views ignore this default <div>, and instead do things like draw
- # to the HTML canvas. In this case though, we use the <div> to attach a
- # Graph3d to the DOM.
- @_graph = new vis.Graph3d(@el, @get_data(), @model.options)
-
- # Set Backbone listener so that when the Bokeh data source has a change
- # event, we can process the new data
- @listenTo(@model.data_source, 'change', () =>
- @_graph.setData(@get_data())
- )
-
- # This is the callback executed when the Bokeh data has an change. Its basic
- # function is to adapt the Bokeh data source to the vis.js DataSet format.
- get_data: () ->
- data = new vis.DataSet()
- source = @model.data_source
- for i in [0...source.get_length()]
- data.add({
- x: source.get_column(@model.x)[i]
- y: source.get_column(@model.y)[i]
- z: source.get_column(@model.z)[i]
- })
- return data
-
-# We must also create a corresponding JavaScript Backbone model sublcass to
-# correspond to the python Bokeh model subclass. In this case, since we want
-# an element that can position itself in the DOM according to a Bokeh layout,
-# we subclass from ``LayoutDOM``
-export class Surface3d extends LayoutDOM
-
- # This is usually boilerplate. In some cases there may not be a view.
- default_view: Surface3dView
-
- # The ``type`` class attribute should generally match exactly the name
- # of the corresponding Python class.
- type: "Surface3d"
-
- # The @define block adds corresponding "properties" to the JS model. These
- # should basically line up 1-1 with the Python model class. Most property
- # types have counterparts, e.g. ``bokeh.core.properties.String`` will be
- # ``p.String`` in the JS implementatin. Where the JS type system is not yet
- # as rich, you can use ``p.Any`` as a "wildcard" property type.
- @define {
- x: [ p.String ]
- y: [ p.String ]
- z: [ p.String ]
- data_source: [ p.Instance ]
- options: [ p.Any, OPTIONS ]
- }
-"""
-
-
-# This custom extension model will have a DOM view that should layout-able in
-# Bokeh layouts, so use ``LayoutDOM`` as the base class. If you wanted to create
-# a custom tool, you could inherit from ``Tool``, or from ``Glyph`` if you
-# wanted to create a custom glyph, etc.
-class Surface3d(LayoutDOM):
-
- # The special class attribute ``__implementation__`` should contain a string
- # of JavaScript (or CoffeeScript) code that implements the JavaScript side
- # of the custom extension model.
- __implementation__ = JS_CODE
-
- # Below are all the "properties" for this model. Bokeh properties are
- # class attributes that define the fields (and their types) that can be
- # communicated automatically between Python and the browser. Properties
- # also support type validation. More information about properties in
- # can be found here:
- #
- # http://bokeh.pydata.org/en/latest/docs/reference/core.html#bokeh-core-properties
-
- # This is a Bokeh ColumnDataSource that can be updated in the Bokeh
- # server by Python code
- data_source = Instance(ColumnDataSource)
-
- # The vis.js library that we are wrapping expects data for x, y, z, and
- # color. The data will actually be stored in the ColumnDataSource, but
- # these properties let us specify the *name* of the column that should
- # be used for each field.
- x = String
- y = String
- z = String
- color = String
-
- options = Dict(String, Any, default=DEFAULTS)
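The ``x``, ``y``, and ``z`` properties in the model above hold the *names* of columns rather than the data itself; ``get_data`` resolves those names against the ColumnDataSource at render time. That indirection can be sketched with plain dicts (hypothetical column names; the real source is a Bokeh ``ColumnDataSource``):

```python
# The data lives in a column store; the model holds only column *names*.
source = {'height': [1.0, 2.0], 'width': [3.0, 4.0], 'depth': [5.0, 6.0]}
model = {'x': 'height', 'y': 'width', 'z': 'depth'}

# Mirror get_data: look up each column by the name stored on the model,
# then zip the columns into per-point records.
points = [
    {'x': source[model['x']][i],
     'y': source[model['y']][i],
     'z': source[model['z']][i]}
    for i in range(len(source[model['x']]))
]
print(points[0])  # {'x': 1.0, 'y': 3.0, 'z': 5.0}
```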
diff --git a/jwql/bokeh_templating/example/example_interface.yaml b/jwql/bokeh_templating/example/example_interface.yaml
index 0abc9cecf..7e6a32e3e 100644
--- a/jwql/bokeh_templating/example/example_interface.yaml
+++ b/jwql/bokeh_templating/example/example_interface.yaml
@@ -1,35 +1,35 @@
-- !ColumnDataSource: &dummy
+- !ColumnDataSource: &dummy # This is a dummy ColumnDataSource used to trigger the controller method whenever a slider is changed.
data:
value: []
on_change: ['data', !self.controller ]
-- !CustomJS: &callback
+- !CustomJS: &callback # This callback changes the value of the dummy ColumnDataSource data to trigger the controller method.
ref: "callback"
args:
source: *dummy
code: "\n source.data = { value: [cb_obj.value] }\n"
-- !Slider: &a_slider
+- !Slider: &a_slider # a slider for the a value
ref: "a_slider"
title: "A"
value: 4
range: !!python/tuple [1, 20, 0.1]
callback: *callback
-- !Slider: &b_slider
+- !Slider: &b_slider # a slider for the b value
ref: "b_slider"
title: "B"
value: 2
range: !!python/tuple [1, 20, 0.1]
callback: *callback
-- !ColumnDataSource: &figure_source
+- !ColumnDataSource: &figure_source # the ColumnDataSource for the figure
ref: "figure_source"
data:
x: !self.x
y: !self.y
-- !Figure: &the_figure
+- !Figure: &the_figure # the Figure itself, which includes a single line element.
ref: 'the_figure'
elements:
- {'kind': 'line', 'source': *figure_source, 'line_color': 'orange', 'line_width': 2}
-- !Document:
+- !Document: # the Bokeh document layout: a single column with the figure and two sliders
- !column:
- - *the_figure
+ - *the_figure # note the use of YAML anchors to add the Bokeh objects to the Document layout directly.
- *a_slider
- *b_slider
\ No newline at end of file
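The YAML above wires the sliders to the controller indirectly: each slider's CustomJS callback writes into the dummy ColumnDataSource, whose ``on_change`` hook then invokes ``self.controller``. A stdlib-only sketch of that observer indirection (the ``Observable`` class is a hypothetical stand-in, not Bokeh's API):

```python
class Observable:
    """A tiny stand-in for a ColumnDataSource with an on_change hook."""
    def __init__(self):
        self._data = {}
        self._listeners = []

    def on_change(self, callback):
        self._listeners.append(callback)

    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, new):
        # Assigning .data notifies every listener, like Bokeh's on_change.
        old, self._data = self._data, new
        for cb in self._listeners:
            cb("data", old, new)


# The dummy source plays the role of the !ColumnDataSource &dummy anchor.
dummy = Observable()
received = []
dummy.on_change(lambda attr, old, new: received.append(new["value"][0]))

# A slider's CustomJS callback would perform the equivalent of this assignment:
dummy.data = {"value": [4.2]}
print(received)  # [4.2]
```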
diff --git a/jwql/bokeh_templating/example/main.py b/jwql/bokeh_templating/example/main.py
index 5617f257a..3aa0ac856 100644
--- a/jwql/bokeh_templating/example/main.py
+++ b/jwql/bokeh_templating/example/main.py
@@ -1,10 +1,19 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
"""
-Created on Fri Jul 20 10:03:35 2018
+This is a minimal example demonstrating how to create a Bokeh app using
+the ``bokeh-templating`` package and the associated YAML template files.
-@author: gkanarek
+Author
+-------
+
+ - Graham Kanarek
+
+Dependencies
+------------
+
+ The user must have PyYAML, Bokeh, and the ``bokeh-templating``
+ packages installed.
"""
+
import os
import numpy as np
@@ -14,24 +23,37 @@
class TestBokehApp(BokehTemplate):
+ """This is a minimal ``BokehTemplate`` app."""
def pre_init(self):
+ """Before creating the Bokeh interface (by parsing the interface
+ file), we must initialize our ``a`` and ``b`` variables, and set
+ the path to the interface file.
+ """
+
self.a, self.b = 4, 2
self.format_string = None
self.interface_file = os.path.join(file_dir, "example_interface.yaml")
+ # No post-initialization tasks are required.
post_init = None
@property
def x(self):
+ """The x-value of the Lissajous curves."""
return 4. * np.sin(self.a * np.linspace(0, 2 * np.pi, 500))
@property
def y(self):
+ """The y-value of the Lissajous curves."""
return 3. * np.sin(self.b * np.linspace(0, 2 * np.pi, 500))
def controller(self, attr, old, new):
+ """This is the controller function which is used to update the
+ curves when the sliders are adjusted. Note the use of the
+ ``self.refs`` dictionary for accessing the Bokeh object
+ attributes."""
self.a = self.refs["a_slider"].value
self.b = self.refs["b_slider"].value
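The ``x`` and ``y`` properties above evaluate Lissajous curves from the slider values ``a`` and ``b``. The same computation can be sketched with only the standard library (``math`` in place of NumPy's vectorized ``sin``/``linspace``):

```python
import math

def lissajous(a, b, n=500):
    """Return x and y samples of a Lissajous curve, mirroring the
    TestBokehApp.x / TestBokehApp.y properties (4*sin(a*t), 3*sin(b*t))."""
    # Equivalent of np.linspace(0, 2*pi, n): n points, endpoint included.
    t = [2 * math.pi * i / (n - 1) for i in range(n)]
    x = [4.0 * math.sin(a * tt) for tt in t]
    y = [3.0 * math.sin(b * tt) for tt in t]
    return x, y

x, y = lissajous(4, 2)
```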
diff --git a/jwql/bokeh_templating/factory.py b/jwql/bokeh_templating/factory.py
index 6b8b18e4e..11e7d91bb 100644
--- a/jwql/bokeh_templating/factory.py
+++ b/jwql/bokeh_templating/factory.py
@@ -1,21 +1,72 @@
-#!/usr/bin/env python3
"""
-Created on Mon Feb 20 14:05:03 2017
+This module defines YAML constructors and factory functions which are
+used to create Bokeh objects parsed from YAML template files.
-@author: gkanarek
+The ``mapping_factory`` and ``sequence_factory`` functions are used to
+create a constructor function for each of the mappings (i.e., classes)
+and sequences (i.e., functions) included in the keyword map. The
+``document_constructor`` and ``figure_constructor`` functions are
+stand-alone constructors for the ``!Document`` and ``!Figure`` tag,
+respectively.
+
+Author
+-------
+
+ - Graham Kanarek
+
+Use
+---
+
+ The functions in this file are not intended to be called by the user
+ directly; users should subclass the ``BokehTemplate`` class found in
+ ``template.py`` instead. However, they can be used as a model for
+ creating new constructors for user-defined tags, which can then be
+    registered using the ``BokehTemplate.register_mapping_constructor``
+    and ``BokehTemplate.register_sequence_constructor`` classmethods.
+
+Dependencies
+------------
+
+ The user must have Bokeh installed.
"""
from bokeh.io import curdoc
from .keyword_map import bokeh_mappings as mappings, bokeh_sequences as sequences
-# Figures get their own constructor
+# Figures get their own constructor so we remove references to Figures from
+# the keyword maps.
Figure = mappings.pop("Figure")
del sequences["figure"]
def mapping_factory(tool, element_type):
- def mapping_constructor(loader, node):
+ """
+ Create a mapping constructor for the given tool, used to parse the
+ given element tag.
+
+ Parameters
+ ----------
+ tool : BokehTemplate instance
+ The web app class instance to which the constructor will be
+ attached. This will become ``self`` when the factory is a method,
+ and is used to both store the Bokeh objects in the
+ ``BokehTemplate.refs`` dictionary, and allow for app-wide
+ formatting choices via ``BokehTemplate.format_string``.
+
+ element_type : str
+ The Bokeh element name for which a constructor is desired. For
+ example, an ``element_type`` of ``'Slider'`` will create a
+ constructor for a Bokeh ``Slider`` widget, designated by the
+ ``!Slider`` tag in the YAML template file.
+
+ Usage
+ -----
+ See the ``BokehTemplate`` class implementation in ``template.py``
+ for an example of how this function is used.
+ """
+
+ def mapping_constructor(loader, node): #docstring added below
fmt = tool.formats.get(element_type, {})
value = loader.construct_mapping(node, deep=True)
ref = value.pop("ref", "")
@@ -51,10 +102,46 @@ def mapping_constructor(loader, node):
yield obj
mapping_constructor.__name__ = element_type.lower() + '_' + mapping_constructor.__name__
+ mapping_constructor.__doc__ = """
+ A YAML constructor for the ``{et}`` Bokeh object. This will create a ``{et}``
+ object wherever the ``!{et}`` tag appears in the YAML template file.
+ If a ``ref`` tag is specified, the object will then be stored in the
+ ``BokehTemplate.refs`` dictionary.
+
+ This constructor is used for mappings -- i.e., classes or functions
+ which primarily have keyword arguments in their signatures. If
+ positional arguments appear, they can be included in the YAML file
+    with the ``args`` keyword.
+ """.format(et=element_type)
+
return mapping_constructor
def sequence_factory(tool, element_type):
+ """ Create a sequence constructor for the given tool, used to parse
+ the given element tag.
+
+ Parameters
+ ----------
+ tool : BokehTemplate instance
+ The web app class instance to which the constructor will be
+ attached. This will become ``self`` when the factory is a method,
+ and is used to both store the Bokeh objects in the
+ ``BokehTemplate.refs`` dictionary, and allow for app-wide
+ formatting choices via ``BokehTemplate.format_string``.
+
+ element_type : str
+ The Bokeh element name for which a constructor is desired. For
+ example, an ``element_type`` of ``'Slider'`` will create a
+ constructor for a Bokeh ``Slider`` widget, designated by the
+ ``!Slider`` tag in the YAML template file.
+
+ Usage
+ -----
+ See the ``BokehTemplate`` class implementation in ``template.py``
+ for an example of how this function is used.
+ """
+
def sequence_constructor(loader, node):
fmt = tool.formats.get(element_type, {})
value = loader.construct_sequence(node, deep=True)
@@ -62,12 +149,29 @@ def sequence_constructor(loader, node):
yield obj
sequence_constructor.__name__ = element_type.lower() + '_' + sequence_constructor.__name__
+ sequence_constructor.__doc__ = """
+ A YAML constructor for the ``{et}`` Bokeh object. This will create a ``{et}``
+ object wherever the ``!{et}`` tag appears in the YAML template file.
+ If a ``ref`` tag is specified, the object will then be stored in the
+ ``BokehTemplate.refs`` dictionary.
+
+ This constructor is used for sequences -- i.e., classes or functions
+ which have only positional arguments in their signatures (which for
+ Bokeh is only functions, no classes).
+ """.format(et=element_type)
+
return sequence_constructor
# These constructors need more specialized treatment
def document_constructor(tool, loader, node):
+ """ A YAML constructor for the Bokeh document, which is grabbed via
+ the Bokeh ``curdoc()`` function. When laying out a Bokeh document
+ with a YAML template, the ``!Document`` tag should be used as the
+ top-level tag in the layout.
+ """
+
layout = loader.construct_sequence(node, deep=True)
for element in layout:
curdoc().add_root(element)
@@ -76,6 +180,12 @@ def document_constructor(tool, loader, node):
def figure_constructor(tool, loader, node):
+ """ A YAML constructor for Bokeh Figure objects, which are
+ complicated enough to require their own (non-factory) constructor.
+ Each ``!Figure`` tag in the YAML template file will be turned into a
+ ``Figure`` object via this constructor (once it's been registered by
+ the ``BokehTemplate`` class).
+ """
fig = loader.construct_mapping(node, deep=True)
fmt = tool.formats.get('Figure', {})
diff --git a/jwql/bokeh_templating/keyword_map.py b/jwql/bokeh_templating/keyword_map.py
index a5f89931b..502d3ef93 100644
--- a/jwql/bokeh_templating/keyword_map.py
+++ b/jwql/bokeh_templating/keyword_map.py
@@ -1,21 +1,49 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-"""
-Created on Thu Jul 19 09:54:47 2018
+"""A script to scrape the Bokeh package and collate dictionaries of
+classes and functions.
+
+The ``_parse_module`` function iterates over a module, and uses the
+``inspect`` package to sort everything in the module's namespace (as
+identified by ``inspect.getmembers``) into a dictionary of mappings
+(requiring primarily keyword arguments) and sequences (requiring
+primarily positional arguments).
+
+Note that the files ``surface3d.py`` and ``surface3d.ts``, used to
+create 3D surface plots, were downloaded from the Bokeh ``surface3d``
+example.
+
+Author
+-------
+
+ - Graham Kanarek
+
+Use
+---
-@author: gkanarek
+ To access the Bokeh elements, the user should import as follows:
+
+ ::
+
+ from jwql.bokeh_templating.keyword_map import bokeh_sequences, bokeh_mappings
+
+Dependencies
+------------
+
+ The user must have Bokeh installed.
"""
from bokeh import layouts, models, palettes, plotting, transform
from inspect import getmembers, isclass, isfunction
-from .bokeh_surface import Surface3d
+from .surface3d import Surface3d
bokeh_sequences = {}
bokeh_mappings = {"Surface3d": Surface3d} # Note that abstract base classes *are* included
-def parse_module(module):
+def _parse_module(module):
+ """Sort the members of a module into dictionaries of functions
+ (sequences) and classes (mappings)."""
+
test = lambda nm, mem: (not nm.startswith("_")) and (module.__name__ in mem.__module__)
seqs = {nm: mem for nm, mem in getmembers(module, isfunction) if test(nm, mem)}
maps = {nm: mem for nm, mem in getmembers(module, isclass) if test(nm, mem)}
@@ -29,6 +57,6 @@ def parse_module(module):
for module in [models, plotting, layouts, palettes, transform]:
- seqs, maps = parse_module(module)
+ seqs, maps = _parse_module(module)
bokeh_sequences.update(seqs)
bokeh_mappings.update(maps)
diff --git a/jwql/bokeh_templating/surface3d.py b/jwql/bokeh_templating/surface3d.py
new file mode 100644
index 000000000..fa71974e9
--- /dev/null
+++ b/jwql/bokeh_templating/surface3d.py
@@ -0,0 +1,60 @@
+from bokeh.core.properties import Any, Dict, Instance, String
+from bokeh.models import ColumnDataSource, LayoutDOM
+
+# This defines some default options for the Graph3d feature of vis.js
+# See: http://visjs.org/graph3d_examples.html for more details. Note
+# that we are fixing the size of this component, in ``options``, but
+# with additional work it could be made more responsive.
+DEFAULTS = {
+ 'width': '600px',
+ 'height': '600px',
+ 'style': 'surface',
+ 'showPerspective': True,
+ 'showGrid': True,
+ 'keepAspectRatio': True,
+ 'verticalRatio': 1.0,
+ 'legendLabel': 'stuff',
+ 'cameraPosition': {
+ 'horizontal': -0.35,
+ 'vertical': 0.22,
+ 'distance': 1.8,
+ }
+}
+
+
+# This custom extension model will have a DOM view that should be layout-able in
+# Bokeh layouts, so use ``LayoutDOM`` as the base class. If you wanted to create
+# a custom tool, you could inherit from ``Tool``, or from ``Glyph`` if you
+# wanted to create a custom glyph, etc.
+class Surface3d(LayoutDOM):
+
+ # The special class attribute ``__implementation__`` should contain a string
+ # of JavaScript (or TypeScript) code that implements the JavaScript side
+ # of the custom extension model.
+ __implementation__ = "surface3d.ts"
+
+ # Below are all the "properties" for this model. Bokeh properties are
+ # class attributes that define the fields (and their types) that can be
+ # communicated automatically between Python and the browser. Properties
+ # also support type validation. More information about properties in
+ # can be found here:
+ #
+ # https://docs.bokeh.org/en/latest/docs/reference/core/properties.html#bokeh-core-properties
+
+ # This is a Bokeh ColumnDataSource that can be updated in the Bokeh
+ # server by Python code
+ data_source = Instance(ColumnDataSource)
+
+ # The vis.js library that we are wrapping expects data for x, y, and z.
+ # The data will actually be stored in the ColumnDataSource, but these
+ # properties let us specify the *name* of the column that should be
+ # used for each field.
+ x = String
+
+ y = String
+
+ z = String
+
+ # Any of the available vis.js options for Graph3d can be set by changing
+ # the contents of this dictionary.
+ options = Dict(String, Any, default=DEFAULTS)
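Because ``options`` is a plain ``Dict(String, Any)`` property, any vis.js Graph3d option can be overridden with an ordinary dict merge, keeping the remaining defaults intact (a minimal sketch using a trimmed copy of the DEFAULTS dict above; no Bokeh import needed):

```python
# Trimmed copy of the DEFAULTS dict from surface3d.py, for illustration.
DEFAULTS = {
    'width': '600px',
    'height': '600px',
    'style': 'surface',
    'showPerspective': True,
}

# Override a single vis.js option while keeping the other defaults:
options = {**DEFAULTS, 'style': 'dot-color'}
print(options['style'], options['width'])  # dot-color 600px
```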
diff --git a/jwql/bokeh_templating/surface3d.ts b/jwql/bokeh_templating/surface3d.ts
new file mode 100644
index 000000000..b788eba4f
--- /dev/null
+++ b/jwql/bokeh_templating/surface3d.ts
@@ -0,0 +1,142 @@
+// This file contains the JavaScript (TypeScript) implementation
+// for a Bokeh custom extension. The "surface3d.py" contains the
+// python counterpart.
+//
+// This custom model wraps one part of the third-party vis.js library:
+//
+// http://visjs.org/index.html
+//
+// Making it easy to hook up python data analytics tools (NumPy, SciPy,
+// Pandas, etc.) to web presentations using the Bokeh server.
+
+import {HTMLBox, HTMLBoxView} from "models/layouts/html_box"
+import {ColumnDataSource} from "models/sources/column_data_source"
+import * as p from "core/properties"
+
+declare namespace vis {
+ class Graph3d {
+ constructor(el: HTMLElement, data: object, OPTIONS: object)
+ setData(data: vis.DataSet): void
+ }
+
+ class DataSet {
+ add(data: unknown): void
+ }
+}
+
+// This defines some default options for the Graph3d feature of vis.js
+// See: http://visjs.org/graph3d_examples.html for more details. This
+// JS object should match the Python default value.
+const OPTIONS = {
+ width: '600px',
+ height: '600px',
+ style: 'surface',
+ showPerspective: true,
+ showGrid: true,
+ keepAspectRatio: true,
+ verticalRatio: 1.0,
+ legendLabel: 'stuff',
+ cameraPosition: {
+ horizontal: -0.35,
+ vertical: 0.22,
+ distance: 1.8,
+ },
+}
+
+// To create custom model extensions that will render on to the HTML canvas or
+// into the DOM, we must create a View subclass for the model. In this case we
+// will subclass from the existing BokehJS ``HTMLBoxView``, corresponding to our ``HTMLBox`` model subclass.
+export class Surface3dView extends HTMLBoxView {
+ model: Surface3d
+
+ private _graph: vis.Graph3d
+
+ render(): void {
+ super.render()
+    // Create a new Graph3d using the vis.js API. This assumes the vis.js has
+ // already been loaded (e.g. in a custom app template). In the future Bokeh
+ // models will be able to specify and load external scripts automatically.
+ //
+    // Views create <div> elements by default, accessible as this.el. Many
+    // Bokeh views ignore this default <div>, and instead do things like draw
+    // to the HTML canvas. In this case though, we use the <div> to attach a
+    // Graph3d to the DOM.
+ this._graph = new vis.Graph3d(this.el, this.get_data(), this.model.options)
+ }
+
+ connect_signals(): void {
+ super.connect_signals()
+ // Set listener so that when the Bokeh data source has a change
+ // event, we can process the new data
+ this.connect(this.model.data_source.change, () => this._graph.setData(this.get_data()))
+ }
+
+  // This is the callback executed when the Bokeh data has a change (e.g. when
+  // the server updates the data). Its basic function is simply to adapt the
+  // Bokeh data source to the vis.js DataSet format.
+ get_data(): vis.DataSet {
+ const data = new vis.DataSet()
+ const source = this.model.data_source
+ for (let i = 0; i < source.get_length()!; i++) {
+ data.add({
+ x: source.data[this.model.x][i],
+ y: source.data[this.model.y][i],
+ z: source.data[this.model.z][i],
+ })
+ }
+ return data
+ }
+}
+
+// We must also create a corresponding JavaScript model subclass to
+// correspond to the python Bokeh model subclass. In this case, since we want
+// an element that can position itself in the DOM according to a Bokeh layout,
+// we subclass from ``HTMLBox``
+
+export namespace Surface3d {
+  export type Attrs = p.AttrsOf<Props>
+
+ export type Props = HTMLBox.Props & {
+    x: p.Property<string>
+    y: p.Property<string>
+    z: p.Property<string>
+    data_source: p.Property<ColumnDataSource>
+ options: p.Property<{[key: string]: unknown}>
+ }
+}
+
+export interface Surface3d extends Surface3d.Attrs {}
+
+export class Surface3d extends HTMLBox {
+ properties: Surface3d.Props
+ __view_type__: Surface3dView
+
+  constructor(attrs?: Partial<Surface3d.Attrs>) {
+ super(attrs)
+ }
+
+ // The ``__name__`` class attribute should generally match exactly the name
+ // of the corresponding Python class. Note that if using TypeScript, this
+ // will be automatically filled in during compilation, so except in some
+ // special cases, this shouldn't be generally included manually, to avoid
+ // typos, which would prohibit serialization/deserialization of this model.
+ static __name__ = "Surface3d"
+
+ static init_Surface3d(): void {
+ // This is usually boilerplate. In some cases there may not be a view.
+ this.prototype.default_view = Surface3dView
+
+ // The @define block adds corresponding "properties" to the JS model. These
+ // should basically line up 1-1 with the Python model class. Most property
+ // types have counterparts, e.g. ``bokeh.core.properties.String`` will be
+    // ``p.String`` in the JS implementation. Where the JS type system is not yet
+ // as rich, you can use ``p.Any`` as a "wildcard" property type.
+ this.define({
+ x: [ p.String ],
+ y: [ p.String ],
+ z: [ p.String ],
+ data_source: [ p.Instance ],
+ options: [ p.Any, OPTIONS ],
+ })
+ }
+}
diff --git a/jwql/bokeh_templating/template.py b/jwql/bokeh_templating/template.py
index 987e4fd52..073d67782 100644
--- a/jwql/bokeh_templating/template.py
+++ b/jwql/bokeh_templating/template.py
@@ -1,117 +1,122 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
"""
-Created on Fri Jul 20 09:49:53 2018
+This module defines the ``BokehTemplate`` class, which can be subclassed
+to create a Bokeh web app with a YAML templating file.
-@author: gkanarek
-"""
-import yaml
-import os
-from . import factory
-from bokeh.embed import components
+Author
+-------
+ - Graham Kanarek
-class BokehTemplateParserError(Exception):
- """
- A custom error for problems with parsing the interface files.
- """
+Use
+---
+ The user should subclass the ``BokehTemplate`` class to create an
+ app, as demonstrated in ``example.py``.
-class BokehTemplateEmbedError(Exception):
- """
- A custom error for problems with embedding components.
- """
+ (A full tutorial on developing Bokeh apps with ``BokehTemplate`` is
+ forthcoming.)
-class BokehTemplate():
- """
- This is the base class for creating Bokeh web apps using a YAML templating
- framework.
+Dependencies
+------------
+
+ The user must have Bokeh and PyYAML installed.
+"""
+
+import yaml
+import os
+from . import factory
+from bokeh.embed import components
+from inspect import signature
+
+
+class BokehTemplate(object):
+ """The base class for creating Bokeh web apps using a YAML
+ templating framework.
+
+ Attributes
+ ----------
+ _embed : bool
+ A flag to indicate whether or not the individual widgets will be
+ embedded in a webpage. If ``False``, the YAML interface file
+ must include a !Document tag. Defaults to ``False``.
+ document: obje
+ The Bokeh Dpcument object (if any), equivalent to the result of
+ calling ``curdoc()``.
+ formats: dict
+ A dictionary of widget formating specifications, parsed from
+ ``format_string`` (if one exists).
+ format_string: str
+ A string of YAML formatting specifications, using the same
+ syntax as the interface file, for Bokeh widgets. Note that
+ formatting choices present in individual widget instances in the
+ interface file override these.
+ interface_file: str
+ The path to the YAML interface file.
+ refs : dict
+ A dictionary of Bokeh objects which are given ``ref`` strings in
+ the interface file. Use this to store and interact with the
+ Bokeh data sources and widgets in callback methods.
+
+ Methods
+ -------
+ ``_mapping_factory``, ``_sequence_factory``,
+ ``_figure_constructor``, and ``_document_constructor`` are imported
+ from ``bokeh_templating.factory``, used by the interface parser to
+ construct Bokeh widgets.
"""
+ # Each of these functions has a ``tool`` argument, which becomes ``self``
+ # when they are stored as methods. This way, the YAML constructors can
+ # store the Bokeh objects in the ``tool.ref`` dictionary, and can access
+ # the formatting string, if any. See ``factory.py`` for more details.
_mapping_factory = factory.mapping_factory
_sequence_factory = factory.sequence_factory
_figure_constructor = factory.figure_constructor
_document_constructor = factory.document_constructor
_embed = False
-
- def _self_constructor(self, loader, tag_suffix, node):
- """
- A multi_constructor for `!self` tag in the interface file.
- """
-
- yield eval("self" + tag_suffix, globals(), locals())
-
- def _register_default_constructors(self):
- for m in factory.mappings:
- yaml.add_constructor("!" + m + ":", self._mapping_factory(m))
-
- for s in factory.sequences:
- yaml.add_constructor("!" + s + ":", self._sequence_factory(s))
-
- yaml.add_constructor("!Figure:", self._figure_constructor)
- yaml.add_constructor("!Document:", self._document_constructor)
- yaml.add_multi_constructor(u"!self", self._self_constructor)
-
- def pre_init(self):
- """
- This should be implemented by the app subclass, to do any pre-
- initialization steps that it requires (setting defaults, loading
- data, etc).
-
- If this is not required, subclass should set `pre_init = None`
- in the class definition.
- """
-
- raise NotImplementedError
-
- def post_init(self):
- """
- This should be implemented by the app subclass, to do any post-
- initialization steps that the tool requires.
-
- If this is not required, subclass should set `post_init = None`
- in the class definition.
- """
-
- raise NotImplementedError
-
- def __init__(self):
+ document = None
+ format_string = ""
+ formats = {}
+ interface_file = ""
+ refs = {}
+
+ def __init__(self, **kwargs):
+ # Register the default constructors
self._register_default_constructors()
- # Allow for pre-init stuff from the subclass.
+ # Allow for pre-initialization code from the subclass.
if self.pre_init is not None:
- self.pre_init()
+ if signature(self.pre_init).parameters:
+ # If we try to call pre_init with keyword parameters when none
+ # are included, it will throw an error; thus, we use inspect.signature
+ self.pre_init(**kwargs)
+ else:
+ self.pre_init()
# Initialize attributes for YAML parsing
self.formats = {}
self.refs = {}
- self.document = None
# Parse formatting string, if any, and the interface YAML file
- self.include_formatting()
- self.parse_interface()
+ self._include_formatting()
+ self._parse_interface()
# Allow for post-init stuff from the subclass.
if self.post_init is not None:
self.post_init()
- def include_formatting(self):
- """
- This should simply be a dictionary of formatting keywords at the end.
- """
+ def _include_formatting(self):
+ """A utility function to parse the format string, if any."""
if not self.format_string:
return
self.formats = yaml.load(self.format_string, Loader=yaml.Loader)
- def parse_interface(self):
- """
- This is the workhorse YAML parser, which creates the interface based
- on the layout file.
-
- `interface_file` is the path to the interface .yaml file to be parsed.
+ def _parse_interface(self):
+ """Parse the YAML interface file using the registered
+ constructors
"""
if not self.interface_file:
@@ -124,27 +129,102 @@ def parse_interface(self):
with open(filepath) as f:
interface = f.read()
- # First, let's make sure that there's a Document in here
+ # If necessary, verify that the interface string contains !Document tag
if not self._embed and '!Document' not in interface:
raise BokehTemplateParserError("Interface file must contain a Document tag")
# Now, since we've registered all the constructors, we can parse the
# entire string with yaml. We don't need to assign the result to a
# variable, since the constructors store everything in self.refs
- # (and self.document, for the document)
+ # (and self.document, for the document).
+ try:
+            list(yaml.load_all(interface, Loader=yaml.Loader))
+ except yaml.YAMLError as exc:
+ raise BokehTemplateParserError(exc)
- self.full_stream = list(yaml.load(interface, Loader=yaml.Loader))
+ def _register_default_constructors(self):
+ """Register all the default constructors with
+ ``yaml.add_constructor``.
+ """
+ for m in factory.mappings:
+ yaml.add_constructor("!" + m + ":", self._mapping_factory(m))
- def parse_string(self, yaml_string):
- return list(yaml.load(yaml_string, Loader=yaml.Loader))
+ for s in factory.sequences:
+ yaml.add_constructor("!" + s + ":", self._sequence_factory(s))
+
+ yaml.add_constructor("!Figure:", self._figure_constructor)
+ yaml.add_constructor("!Document:", self._document_constructor)
+ yaml.add_multi_constructor(u"!self", self._self_constructor)
+
+ def _self_constructor(self, loader, tag_suffix, node):
+ """A multi_constructor for `!self` tag in the interface file."""
+ yield eval("self" + tag_suffix, globals(), locals())
def embed(self, ref):
+ """A wrapper for ``bokeh.embed.components`` to return embeddable
+ code for the given widget reference."""
element = self.refs.get(ref, None)
if element is None:
raise BokehTemplateEmbedError("Undefined component reference")
return components(element)
- def register_sequence_constructor(self, tag, parse_func):
+ @staticmethod
+ def parse_string(yaml_string):
+        """ A utility function to parse any YAML string using the
+        registered constructors. (Usually used for debugging.)"""
+        return list(yaml.load_all(yaml_string, Loader=yaml.Loader))
+
+ def post_init(self):
+ """This should be implemented by the app subclass, to perform
+ any post-initialization actions that the tool requires.
+
+ If this is not required, the subclass should set
+ `post_init = None` in the class definition.
+ """
+
+ raise NotImplementedError
+
+ def pre_init(self, **kwargs):
+ """This should be implemented by the app subclass, to perform
+ any pre-initialization actions that it requires (setting
+ defaults, loading data, etc). Note that positional arguments are
+ not currently supported.
+
+ If this is not required, the subclass should set
+ `pre_init = None` in the class definition.
+ """
+
+ raise NotImplementedError
+
+ @classmethod
+ def register_sequence_constructor(cls, tag, parse_func):
+ """
+ Register a new sequence constructor with YAML.
+
+ Parameters
+ ----------
+ tag : str
+ The YAML tag string to be used for the constructor.
+ parse_func: object
+ The parsing function to be registered with YAML. This
+ function should accept a multi-line string, and return a
+ python object.
+
+ Usage
+ -----
+ This classmethod should be used to register a new constructor
+    *before* creating & instantiating a subclass of BokehTemplate:
+
+ ::
+
+ from bokeh_template import BokehTemplate
+ BokehTemplate.register_sequence_constructor("my_tag", my_parser)
+
+ class myTool(BokehTemplate):
+ pass
+
+ myTool()
+ """
if tag.startswith("!"):
tag = tag[1:]
@@ -154,7 +234,35 @@ def user_constructor(loader, node):
user_constructor.__name__ = tag.lower() + "_constructor"
yaml.add_constructor("!" + tag, user_constructor)
- def register_mapping_constructor(self, tag, parse_func):
+ @classmethod
+ def register_mapping_constructor(cls, tag, parse_func):
+ """
+ Register a new mapping constructor with YAML.
+
+ Parameters
+ ----------
+ tag : str
+ The YAML tag string to be used for the constructor.
+ parse_func : object
+ The parsing function to be registered with YAML. This
+ function should accept a multi-line string, and return a
+ python object.
+
+ Usage
+ -----
+ This classmethod should be used to register a new constructor
+ *before* creating & instantiating a subclass of ``BokehTemplate``:
+
+ ::
+
+ from bokeh_template import BokehTemplate
+ BokehTemplate.register_mapping_constructor("my_tag", my_parser)
+
+ class myTool(BokehTemplate):
+ pass
+
+ myTool()
+ """
if tag.startswith("!"):
tag = tag[1:]
@@ -163,3 +271,11 @@ def user_constructor(loader, node):
yield parse_func(value)
user_constructor.__name__ = tag.lower() + "_constructor"
yaml.add_constructor("!" + tag, user_constructor)
+
+
+class BokehTemplateEmbedError(Exception):
+ """A custom error for problems with embedding components."""
+
+
+class BokehTemplateParserError(Exception):
+ """A custom error for problems with parsing the interface files."""
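The registration pattern that `register_sequence_constructor` and `register_mapping_constructor` wrap can be sketched directly with PyYAML. The `!my_tag` tag and `my_parser` function below are illustrative stand-ins, not part of the template API:

```python
import yaml

# Hypothetical parse function: receives the constructed sequence and
# returns a Python object (here, the upper-cased items).
def my_parser(value):
    return [item.upper() for item in value]

# A constructor in the shape PyYAML expects: turn the YAML node into a
# plain Python list, then hand it to the parse function.
def my_tag_constructor(loader, node):
    return my_parser(loader.construct_sequence(node))

# Register the constructor under the "!my_tag" tag on the safe loader
yaml.add_constructor("!my_tag", my_tag_constructor, Loader=yaml.SafeLoader)

document = """
values: !my_tag
  - alpha
  - beta
"""
parsed = yaml.safe_load(document)
```

After registration, any `!my_tag` sequence in a parsed document is routed through `my_parser`, so `parsed["values"]` comes back as `["ALPHA", "BETA"]`.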
diff --git a/jwql/database/database_interface.py b/jwql/database/database_interface.py
index 934f58c8c..855a5c67e 100755
--- a/jwql/database/database_interface.py
+++ b/jwql/database/database_interface.py
@@ -419,7 +419,10 @@ class : obj
MIRIBadPixelStats = monitor_orm_factory('miri_bad_pixel_stats')
NIRSpecBadPixelQueryHistory = monitor_orm_factory('nirspec_bad_pixel_query_history')
NIRSpecBadPixelStats = monitor_orm_factory('nirspec_bad_pixel_stats')
-
+NIRCamReadnoiseQueryHistory = monitor_orm_factory('nircam_readnoise_query_history')
+NIRCamReadnoiseStats = monitor_orm_factory('nircam_readnoise_stats')
+NIRISSReadnoiseQueryHistory = monitor_orm_factory('niriss_readnoise_query_history')
+NIRISSReadnoiseStats = monitor_orm_factory('niriss_readnoise_stats')
+
if __name__ == '__main__':
diff --git a/jwql/database/monitor_table_definitions/nircam/nircam_readnoise_query_history.txt b/jwql/database/monitor_table_definitions/nircam/nircam_readnoise_query_history.txt
new file mode 100644
index 000000000..c6deea152
--- /dev/null
+++ b/jwql/database/monitor_table_definitions/nircam/nircam_readnoise_query_history.txt
@@ -0,0 +1,8 @@
+INSTRUMENT, string
+APERTURE, string
+START_TIME_MJD, float
+END_TIME_MJD, float
+ENTRIES_FOUND, integer
+FILES_FOUND, integer
+RUN_MONITOR, bool
+ENTRY_DATE, datetime
\ No newline at end of file
diff --git a/jwql/database/monitor_table_definitions/nircam/nircam_readnoise_stats.txt b/jwql/database/monitor_table_definitions/nircam/nircam_readnoise_stats.txt
new file mode 100644
index 000000000..d6cebf4c2
--- /dev/null
+++ b/jwql/database/monitor_table_definitions/nircam/nircam_readnoise_stats.txt
@@ -0,0 +1,35 @@
+UNCAL_FILENAME, string
+APERTURE, string
+DETECTOR, string
+SUBARRAY, string
+READ_PATTERN, string
+NINTS, string
+NGROUPS, string
+EXPSTART, string
+READNOISE_FILENAME, string
+FULL_IMAGE_MEAN, float
+FULL_IMAGE_STDDEV, float
+FULL_IMAGE_N, float_array_1d
+FULL_IMAGE_BIN_CENTERS, float_array_1d
+READNOISE_DIFF_IMAGE, string
+DIFF_IMAGE_MEAN, float
+DIFF_IMAGE_STDDEV, float
+DIFF_IMAGE_N, float_array_1d
+DIFF_IMAGE_BIN_CENTERS, float_array_1d
+ENTRY_DATE, datetime
+AMP1_MEAN, float
+AMP1_STDDEV, float
+AMP1_N, float_array_1d
+AMP1_BIN_CENTERS, float_array_1d
+AMP2_MEAN, float
+AMP2_STDDEV, float
+AMP2_N, float_array_1d
+AMP2_BIN_CENTERS, float_array_1d
+AMP3_MEAN, float
+AMP3_STDDEV, float
+AMP3_N, float_array_1d
+AMP3_BIN_CENTERS, float_array_1d
+AMP4_MEAN, float
+AMP4_STDDEV, float
+AMP4_N, float_array_1d
+AMP4_BIN_CENTERS, float_array_1d
\ No newline at end of file
diff --git a/jwql/database/monitor_table_definitions/niriss/niriss_readnoise_query_history.txt b/jwql/database/monitor_table_definitions/niriss/niriss_readnoise_query_history.txt
new file mode 100644
index 000000000..c6deea152
--- /dev/null
+++ b/jwql/database/monitor_table_definitions/niriss/niriss_readnoise_query_history.txt
@@ -0,0 +1,8 @@
+INSTRUMENT, string
+APERTURE, string
+START_TIME_MJD, float
+END_TIME_MJD, float
+ENTRIES_FOUND, integer
+FILES_FOUND, integer
+RUN_MONITOR, bool
+ENTRY_DATE, datetime
\ No newline at end of file
diff --git a/jwql/database/monitor_table_definitions/niriss/niriss_readnoise_stats.txt b/jwql/database/monitor_table_definitions/niriss/niriss_readnoise_stats.txt
new file mode 100644
index 000000000..d6cebf4c2
--- /dev/null
+++ b/jwql/database/monitor_table_definitions/niriss/niriss_readnoise_stats.txt
@@ -0,0 +1,35 @@
+UNCAL_FILENAME, string
+APERTURE, string
+DETECTOR, string
+SUBARRAY, string
+READ_PATTERN, string
+NINTS, string
+NGROUPS, string
+EXPSTART, string
+READNOISE_FILENAME, string
+FULL_IMAGE_MEAN, float
+FULL_IMAGE_STDDEV, float
+FULL_IMAGE_N, float_array_1d
+FULL_IMAGE_BIN_CENTERS, float_array_1d
+READNOISE_DIFF_IMAGE, string
+DIFF_IMAGE_MEAN, float
+DIFF_IMAGE_STDDEV, float
+DIFF_IMAGE_N, float_array_1d
+DIFF_IMAGE_BIN_CENTERS, float_array_1d
+ENTRY_DATE, datetime
+AMP1_MEAN, float
+AMP1_STDDEV, float
+AMP1_N, float_array_1d
+AMP1_BIN_CENTERS, float_array_1d
+AMP2_MEAN, float
+AMP2_STDDEV, float
+AMP2_N, float_array_1d
+AMP2_BIN_CENTERS, float_array_1d
+AMP3_MEAN, float
+AMP3_STDDEV, float
+AMP3_N, float_array_1d
+AMP3_BIN_CENTERS, float_array_1d
+AMP4_MEAN, float
+AMP4_STDDEV, float
+AMP4_N, float_array_1d
+AMP4_BIN_CENTERS, float_array_1d
\ No newline at end of file
diff --git a/jwql/instrument_monitors/common_monitors/readnoise_monitor.py b/jwql/instrument_monitors/common_monitors/readnoise_monitor.py
new file mode 100644
index 000000000..31076126a
--- /dev/null
+++ b/jwql/instrument_monitors/common_monitors/readnoise_monitor.py
@@ -0,0 +1,618 @@
+#! /usr/bin/env python
+
+"""This module contains code for the readnoise monitor, which monitors
+the readnoise levels in dark exposures as well as the accuracy of
+the pipeline readnoise reference files over time.
+
+For each instrument, the readnoise, technically the correlated double
+sampling (CDS) noise, is found by calculating the standard deviation
+through a stack of consecutive frame differences in each dark exposure.
+The sigma-clipped mean and standard deviation in each of these readnoise
+images, as well as histogram distributions, are recorded in the
+``ReadnoiseStats`` database table.
+
+Next, each of these readnoise images is differenced with the current
+pipeline readnoise reference file to identify the need for new reference
+files. A histogram distribution of these difference images, as well as
+the sigma-clipped mean and standard deviation, are recorded in the
+``ReadnoiseStats`` database table. A png version of these
+difference images is also saved for visual inspection.
+
+Author
+------
+ - Ben Sunnquist
+
+Use
+---
+ This module can be used from the command line as such:
+
+ ::
+
+ python readnoise_monitor.py
+"""
+
+from collections import OrderedDict
+import datetime
+import logging
+import os
+import shutil
+
+from astropy.io import fits
+from astropy.stats import sigma_clip
+from astropy.time import Time
+from astropy.visualization import ZScaleInterval
+import crds
+from jwst.dq_init import DQInitStep
+from jwst.group_scale import GroupScaleStep
+from jwst.refpix import RefPixStep
+from jwst.superbias import SuperBiasStep
+import matplotlib
+matplotlib.use('Agg')
+import matplotlib.pyplot as plt
+import numpy as np
+from pysiaf import Siaf
+from sqlalchemy.sql.expression import and_
+
+from jwql.database.database_interface import session
+from jwql.database.database_interface import NIRCamReadnoiseQueryHistory, NIRCamReadnoiseStats, NIRISSReadnoiseQueryHistory, NIRISSReadnoiseStats
+from jwql.instrument_monitors import pipeline_tools
+from jwql.instrument_monitors.common_monitors.dark_monitor import mast_query_darks
+from jwql.utils import instrument_properties
+from jwql.utils.constants import JWST_INSTRUMENT_NAMES_MIXEDCASE
+from jwql.utils.logging_functions import log_info, log_fail
+from jwql.utils.permissions import set_permissions
+from jwql.utils.utils import ensure_dir_exists, filesystem_path, get_config, initialize_instrument_monitor, update_monitor_table
+
+class Readnoise():
+ """Class for executing the readnoise monitor.
+
+ This class will search for new dark current files in the file
+ system for each instrument and will run the monitor on these
+ files. The monitor will create a readnoise image for each of the
+ new dark files. It will then perform statistical measurements
+ on these readnoise images, as well as their differences with the
+ current pipeline readnoise reference file, in order to monitor
+ the readnoise levels over time as well as ensure the pipeline
+ readnoise reference file is sufficiently capturing the current
+ readnoise behavior. Results are all saved to database tables.
+
+ Attributes
+ ----------
+ output_dir : str
+ Path into which outputs will be placed.
+
+ data_dir : str
+ Path into which new dark files will be copied to be worked on.
+
+ query_start : float
+ MJD start date to use for querying MAST.
+
+ query_end : float
+ MJD end date to use for querying MAST.
+
+ instrument : str
+ Name of instrument used to collect the dark current data.
+
+ aperture : str
+ Name of the aperture used for the dark current (e.g.
+ ``NRCA1_FULL``).
+ """
+
+ def __init__(self):
+ """Initialize an instance of the ``Readnoise`` class."""
+
+ def determine_pipeline_steps(self):
+ """Determines the necessary JWST pipeline steps to run on a
+ given dark file.
+
+ Returns
+ -------
+ pipeline_steps : collections.OrderedDict
+ The pipeline steps to run.
+ """
+
+ pipeline_steps = OrderedDict({})
+
+ # Determine if the file needs group_scale step run
+ if self.read_pattern not in pipeline_tools.GROUPSCALE_READOUT_PATTERNS:
+ pipeline_steps['group_scale'] = False
+ else:
+ pipeline_steps['group_scale'] = True
+
+ # Run the DQ step on all files
+ pipeline_steps['dq_init'] = True
+
+ # Only run the superbias step for NIR instruments
+ if self.instrument.upper() != 'MIRI':
+ pipeline_steps['superbias'] = True
+ else:
+ pipeline_steps['superbias'] = False
+
+ # Run the refpix step on all files
+ pipeline_steps['refpix'] = True
+
+ return pipeline_steps
+
+ def file_exists_in_database(self, filename):
+ """Checks if an entry for filename exists in the readnoise stats
+ database.
+
+ Parameters
+ ----------
+ filename : str
+ The full path to the uncal filename.
+
+ Returns
+ -------
+ file_exists : bool
+ ``True`` if filename exists in the readnoise stats database.
+ """
+
+ query = session.query(self.stats_table)
+ results = query.filter(self.stats_table.uncal_filename == filename).all()
+
+ if len(results) != 0:
+ file_exists = True
+ else:
+ file_exists = False
+
+ return file_exists
+
+ def get_amp_stats(self, image, amps):
+ """Calculates the sigma-clipped mean and stddev, as well as the
+ histogram stats in the input image for each amplifier.
+
+ Parameters
+ ----------
+ image : numpy.ndarray
+ 2D array on which to calculate statistics.
+
+ amps : dict
+ Dictionary containing amp boundary coordinates (output from
+ ``amplifier_info`` function)
+ ``amps[key] = [(xmin, xmax, xstep), (ymin, ymax, ystep)]``
+
+ Returns
+ -------
+ amp_stats : dict
+ Contains the image statistics for each amp.
+ """
+
+ amp_stats = {}
+
+ for key in amps:
+ x_start, x_end, x_step = amps[key][0]
+ y_start, y_end, y_step = amps[key][1]
+
+ # Find sigma-clipped mean/stddev values for this amp
+ amp_data = image[y_start: y_end: y_step, x_start: x_end: x_step]
+ clipped = sigma_clip(amp_data, sigma=3.0, maxiters=5)
+ amp_stats['amp{}_mean'.format(key)] = np.nanmean(clipped)
+ amp_stats['amp{}_stddev'.format(key)] = np.nanstd(clipped)
+
+ # Find the histogram stats for this amp
+ n, bin_centers = self.make_histogram(amp_data)
+ amp_stats['amp{}_n'.format(key)] = n
+ amp_stats['amp{}_bin_centers'.format(key)] = bin_centers
+
+ return amp_stats
+
+ def get_metadata(self, filename):
+ """Collect basic metadata from a fits file.
+
+ Parameters
+ ----------
+ filename : str
+ Name of fits file to examine.
+ """
+
+ header = fits.getheader(filename)
+
+ try:
+ self.detector = header['DETECTOR']
+ self.read_pattern = header['READPATT']
+ self.subarray = header['SUBARRAY']
+ self.nints = header['NINTS']
+ self.ngroups = header['NGROUPS']
+ self.substrt1 = header['SUBSTRT1']
+ self.substrt2 = header['SUBSTRT2']
+ self.subsize1 = header['SUBSIZE1']
+ self.subsize2 = header['SUBSIZE2']
+ self.date_obs = header['DATE-OBS']
+ self.time_obs = header['TIME-OBS']
+ self.expstart = '{}T{}'.format(self.date_obs, self.time_obs)
+ except KeyError as e:
+ logging.error(e)
+
+ def identify_tables(self):
+ """Determine which database tables to use for a run of the
+ readnoise monitor.
+ """
+
+ mixed_case_name = JWST_INSTRUMENT_NAMES_MIXEDCASE[self.instrument]
+ self.query_table = eval('{}ReadnoiseQueryHistory'.format(mixed_case_name))
+ self.stats_table = eval('{}ReadnoiseStats'.format(mixed_case_name))
+
+ def image_to_png(self, image, outname):
+ """Outputs an image array into a png file.
+
+ Parameters
+ ----------
+ image : numpy.ndarray
+ 2D image array.
+
+ outname : str
+ The name given to the output png file.
+
+ Returns
+ -------
+ output_filename : str
+ The full path to the output png file.
+ """
+
+ output_filename = os.path.join(self.data_dir, '{}.png'.format(outname))
+
+ # Get image scale limits
+ zscale = ZScaleInterval()
+ vmin, vmax = zscale.get_limits(image)
+
+ # Plot the image
+ plt.figure(figsize=(12,12))
+ im = plt.imshow(image, cmap='gray', origin='lower', vmin=vmin, vmax=vmax)
+ plt.colorbar(im, label='Readnoise Difference (most recent dark - reffile) [DN]')
+ plt.title('{}'.format(outname))
+
+ # Save the figure
+ plt.savefig(output_filename, bbox_inches='tight', dpi=200)
+ set_permissions(output_filename)
+ logging.info('\t{} created'.format(output_filename))
+
+ return output_filename
+
+ def make_crds_parameter_dict(self):
+ """Construct a parameter dictionary to be used for querying CRDS
+ for the current reffiles in use by the JWST pipeline.
+
+ Returns
+ -------
+ parameters : dict
+ Dictionary of parameters, in the format expected by CRDS.
+ """
+
+ parameters = {}
+ parameters['INSTRUME'] = self.instrument.upper()
+ parameters['DETECTOR'] = self.detector.upper()
+ parameters['READPATT'] = self.read_pattern.upper()
+ parameters['SUBARRAY'] = self.subarray.upper()
+ parameters['DATE-OBS'] = datetime.date.today().isoformat()
+ current_date = datetime.datetime.now()
+ parameters['TIME-OBS'] = current_date.time().isoformat()
+
+ return parameters
+
+ def make_histogram(self, data):
+ """Creates a histogram of the input data and returns the bin
+ centers and the counts in each bin.
+
+ Parameters
+ ----------
+ data : numpy.ndarray
+ The input data.
+
+ Returns
+ -------
+ counts : numpy.ndarray
+ The counts in each histogram bin.
+
+ bin_centers : numpy.ndarray
+ The histogram bin centers.
+ """
+
+ # Calculate the histogram range as that within 4 sigma of the sigma-clipped mean
+ data = data.flatten()
+ clipped = sigma_clip(data, sigma=3.0, maxiters=5)
+ mean, stddev = np.nanmean(clipped), np.nanstd(clipped)
+ lower_thresh, upper_thresh = mean - 4 * stddev, mean + 4 * stddev
+
+ # Some images, e.g. readnoise images, will never have values below zero
+ if (lower_thresh < 0) & (len(data[data < 0]) == 0):
+ lower_thresh = 0.0
+
+ # Make the histogram
+ counts, bin_edges = np.histogram(data, bins='auto', range=(lower_thresh, upper_thresh))
+ bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
+
+ return counts, bin_centers
+
+ def make_readnoise_image(self, data):
+ """Calculates the readnoise for the given input dark current
+ ramp.
+
+ Parameters
+ ----------
+ data : numpy.ndarray
+ The input ramp data. The data shape is assumed to be a 4D
+ array in DMS format (integration, group, y, x).
+
+ Returns
+ -------
+ readnoise : numpy.ndarray
+ The 2D readnoise image.
+ """
+
+ # Create a stack of correlated double sampling (CDS) images using the input
+ # ramp data, combining multiple integrations if necessary.
+ logging.info('\tCreating stack of CDS difference frames')
+ num_ints, num_groups, num_y, num_x = data.shape
+ for integration in range(num_ints):
+ if num_groups % 2 == 0:
+ cds = data[integration, 1::2, :, :] - data[integration, ::2, :, :]
+ else:
+ # Omit the last group if the number of groups is odd
+ cds = data[integration, 1::2, :, :] - data[integration, ::2, :, :][:-1]
+
+ if integration == 0:
+ cds_stack = cds
+ else:
+ cds_stack = np.concatenate((cds_stack, cds), axis=0)
+
+ # Calculate the readnoise by taking the clipped stddev through the CDS stack
+ logging.info('\tCreating readnoise image')
+ clipped = sigma_clip(cds_stack, sigma=3.0, maxiters=3, axis=0)
+ readnoise = np.std(clipped, axis=0)
+ readnoise = readnoise.filled(fill_value=np.nan) # converts masked array to normal array and fills missing data
+
+ return readnoise
+
+ def most_recent_search(self):
+ """Query the query history database and return the information
+ on the most recent query for the given ``aperture`` where
+ the readnoise monitor was executed.
+
+ Returns
+ -------
+ query_result : float
+ Date (in MJD) of the ending range of the previous MAST query
+ where the readnoise monitor was run.
+ """
+
+ query = session.query(self.query_table).filter(and_(self.query_table.aperture==self.aperture,
+ self.query_table.run_monitor==True)).order_by(self.query_table.end_time_mjd).all()
+
+ if len(query) == 0:
+ query_result = 57357.0 # a.k.a. Dec 1, 2015 == CV3
+ logging.info(('\tNo query history for {}. Beginning search date will be set to {}.'.format(self.aperture, query_result)))
+ else:
+ query_result = query[-1].end_time_mjd
+
+ return query_result
+
+ def process(self, file_list):
+ """The main method for processing darks. See module docstrings
+ for further details.
+
+ Parameters
+ ----------
+ file_list : list
+ List of filenames (including full paths) to the dark current
+ files.
+ """
+
+ for filename in file_list:
+ logging.info('\tWorking on file: {}'.format(filename))
+
+ # Get relevant header information for this file
+ self.get_metadata(filename)
+
+ # Run the file through the necessary pipeline steps
+ pipeline_steps = self.determine_pipeline_steps()
+ logging.info('\tRunning pipeline on {}'.format(filename))
+ try:
+ processed_file = pipeline_tools.run_calwebb_detector1_steps(filename, pipeline_steps)
+ logging.info('\tPipeline complete. Output: {}'.format(processed_file))
+ set_permissions(processed_file)
+ except:
+ logging.info('\tPipeline processing failed for {}'.format(filename))
+ continue
+
+ # Find amplifier boundaries so per-amp statistics can be calculated
+ _, amp_bounds = instrument_properties.amplifier_info(processed_file, omit_reference_pixels=True)
+ logging.info('\tAmplifier boundaries: {}'.format(amp_bounds))
+
+ # Get the ramp data; remove first 5 groups and last group for MIRI to avoid reset/rscd effects
+ cal_data = fits.getdata(processed_file, 'SCI', uint=False)
+ if self.instrument == 'MIRI':
+ cal_data = cal_data[:, 5:-1, :, :]
+
+ # Make the readnoise image
+ readnoise_outfile = os.path.join(self.data_dir, os.path.basename(processed_file.replace('.fits', '_readnoise.fits')))
+ readnoise = self.make_readnoise_image(cal_data)
+ fits.writeto(readnoise_outfile, readnoise, overwrite=True)
+ logging.info('\tReadnoise image saved to {}'.format(readnoise_outfile))
+
+ # Calculate the full image readnoise stats
+ clipped = sigma_clip(readnoise, sigma=3.0, maxiters=5)
+ full_image_mean, full_image_stddev = np.nanmean(clipped), np.nanstd(clipped)
+ full_image_n, full_image_bin_centers = self.make_histogram(readnoise)
+ logging.info('\tReadnoise image stats: {:.5f} +/- {:.5f}'.format(full_image_mean, full_image_stddev))
+
+ # Calculate readnoise stats in each amp separately
+ amp_stats = self.get_amp_stats(readnoise, amp_bounds)
+ logging.info('\tReadnoise image stats by amp: {}'.format(amp_stats))
+
+ # Get the current JWST Readnoise Reference File data
+ parameters = self.make_crds_parameter_dict()
+ reffile_mapping = crds.getreferences(parameters, reftypes=['readnoise'])
+ readnoise_file = reffile_mapping['readnoise']
+ if 'NOT FOUND' in readnoise_file:
+ logging.warning('\tNo pipeline readnoise reffile match for this file - assuming all zeros.')
+ pipeline_readnoise = np.zeros(readnoise.shape)
+ else:
+ logging.info('\tPipeline readnoise reffile is {}'.format(readnoise_file))
+ pipeline_readnoise = fits.getdata(readnoise_file)
+
+ # Find the difference between the current readnoise image and the pipeline readnoise reffile, and record image stats.
+ # Sometimes, the pipeline readnoise reffile needs to be cutout to match the subarray.
+ pipeline_readnoise = pipeline_readnoise[self.substrt2-1:self.substrt2+self.subsize2-1, self.substrt1-1:self.substrt1+self.subsize1-1]
+ readnoise_diff = readnoise - pipeline_readnoise
+ clipped = sigma_clip(readnoise_diff, sigma=3.0, maxiters=5)
+ diff_image_mean, diff_image_stddev = np.nanmean(clipped), np.nanstd(clipped)
+ diff_image_n, diff_image_bin_centers = self.make_histogram(readnoise_diff)
+ logging.info('\tReadnoise difference image stats: {:.5f} +/- {:.5f}'.format(diff_image_mean, diff_image_stddev))
+
+ # Save a png of the readnoise difference image for visual inspection
+ logging.info('\tCreating png of readnoise difference image')
+ readnoise_diff_png = self.image_to_png(readnoise_diff, outname=os.path.basename(readnoise_outfile).replace('.fits', '_diff'))
+
+ # Construct new entry for this file for the readnoise database table.
+ # Can't insert values with numpy.float32 datatypes into database
+ # so need to change the datatypes of these values.
+ readnoise_db_entry = {'uncal_filename': filename,
+ 'aperture': self.aperture,
+ 'detector': self.detector,
+ 'subarray': self.subarray,
+ 'read_pattern': self.read_pattern,
+ 'nints': self.nints,
+ 'ngroups': self.ngroups,
+ 'expstart': self.expstart,
+ 'readnoise_filename': readnoise_outfile,
+ 'full_image_mean': float(full_image_mean),
+ 'full_image_stddev': float(full_image_stddev),
+ 'full_image_n': full_image_n.astype(float),
+ 'full_image_bin_centers': full_image_bin_centers.astype(float),
+ 'readnoise_diff_image': readnoise_diff_png,
+ 'diff_image_mean': float(diff_image_mean),
+ 'diff_image_stddev': float(diff_image_stddev),
+ 'diff_image_n': diff_image_n.astype(float),
+ 'diff_image_bin_centers': diff_image_bin_centers.astype(float),
+ 'entry_date': datetime.datetime.now()
+ }
+ for key in amp_stats.keys():
+ if isinstance(amp_stats[key], (int, float)):
+ readnoise_db_entry[key] = float(amp_stats[key])
+ else:
+ readnoise_db_entry[key] = amp_stats[key].astype(float)
+
+ # Add this new entry to the readnoise database table
+ self.stats_table.__table__.insert().execute(readnoise_db_entry)
+ logging.info('\tNew entry added to readnoise database table')
+
+ # Remove the raw and calibrated files to save memory space
+ os.remove(filename)
+ os.remove(processed_file)
+
+ @log_fail
+ @log_info
+ def run(self):
+ """The main method. See module docstrings for further
+ details.
+ """
+
+ logging.info('Begin logging for readnoise_monitor\n')
+
+ # Get the output directory and setup a directory to store the data
+ self.output_dir = os.path.join(get_config()['outputs'], 'readnoise_monitor')
+ ensure_dir_exists(os.path.join(self.output_dir, 'data'))
+
+ # Use the current time as the end time for MAST query
+ self.query_end = Time.now().mjd
+
+ # Loop over all instruments
+ for instrument in ['nircam', 'niriss']:
+ self.instrument = instrument
+
+ # Identify which database tables to use
+ self.identify_tables()
+
+ # Get a list of all possible apertures for this instrument
+ siaf = Siaf(self.instrument)
+ possible_apertures = list(siaf.apertures)
+
+ for aperture in possible_apertures:
+
+ logging.info('\nWorking on aperture {} in {}'.format(aperture, instrument))
+ self.aperture = aperture
+
+ # Locate the record of the most recent MAST search; use this time
+ # (plus a 30 day buffer to catch any missing files from the previous
+ # run) as the start time in the new MAST search.
+ most_recent_search = self.most_recent_search()
+ self.query_start = most_recent_search - 30
+
+ # Query MAST for new dark files for this instrument/aperture
+ logging.info('\tQuery times: {} {}'.format(self.query_start, self.query_end))
+ new_entries = mast_query_darks(instrument, aperture, self.query_start, self.query_end)
+ logging.info('\tAperture: {}, new entries: {}'.format(self.aperture, len(new_entries)))
+
+ # Set up a directory to store the data for this aperture
+ self.data_dir = os.path.join(self.output_dir, 'data/{}_{}'.format(self.instrument.lower(), self.aperture.lower()))
+ if len(new_entries) > 0:
+ ensure_dir_exists(self.data_dir)
+
+ # Get any new files to process
+ new_files = []
+ checked_files = []
+ for file_entry in new_entries:
+ output_filename = os.path.join(self.data_dir, file_entry['filename'].replace('_dark', '_uncal'))
+
+ # Sometimes both the dark and uncal names of a file are picked up in new_entries
+ if output_filename in checked_files:
+ logging.info('\t{} already checked in this run.'.format(output_filename))
+ continue
+ checked_files.append(output_filename)
+
+ # Don't process files that already exist in the readnoise stats database
+ file_exists = self.file_exists_in_database(output_filename)
+ if file_exists:
+ logging.info('\t{} already exists in the readnoise database table.'.format(output_filename))
+ continue
+
+ # Save any new uncal files with enough groups in the output directory; some don't exist in the JWQL filesystem
+ try:
+ filename = filesystem_path(file_entry['filename'])
+ uncal_filename = filename.replace('_dark', '_uncal')
+ if not os.path.isfile(uncal_filename):
+ logging.info('\t{} does not exist in JWQL filesystem, even though {} does'.format(uncal_filename, filename))
+ else:
+ num_groups = fits.getheader(uncal_filename)['NGROUPS']
+ if num_groups > 1: # skip processing if the file doesn't have enough groups to calculate the readnoise; TODO: change to 10 before incorporating MIRI
+ shutil.copy(uncal_filename, self.data_dir)
+ logging.info('\tCopied {} to {}'.format(uncal_filename, output_filename))
+ set_permissions(output_filename)
+ new_files.append(output_filename)
+ else:
+ logging.info('\tNot enough groups to calculate readnoise in {}'.format(uncal_filename))
+ except FileNotFoundError:
+ logging.info('\t{} does not exist in JWQL filesystem'.format(file_entry['filename']))
+
+ # Run the readnoise monitor on any new files
+ if len(new_files) > 0:
+ self.process(new_files)
+ monitor_run = True
+ else:
+ logging.info('\tReadnoise monitor skipped. {} new dark files for {}, {}.'.format(len(new_files), instrument, aperture))
+ monitor_run = False
+
+ # Update the query history
+ new_entry = {'instrument': instrument,
+ 'aperture': aperture,
+ 'start_time_mjd': self.query_start,
+ 'end_time_mjd': self.query_end,
+ 'entries_found': len(new_entries),
+ 'files_found': len(new_files),
+ 'run_monitor': monitor_run,
+ 'entry_date': datetime.datetime.now()}
+ self.query_table.__table__.insert().execute(new_entry)
+ logging.info('\tUpdated the query history table')
+
+ logging.info('Readnoise Monitor completed successfully.')
+
+if __name__ == '__main__':
+
+ module = os.path.splitext(os.path.basename(__file__))[0]
+ start_time, log_file = initialize_instrument_monitor(module)
+
+ monitor = Readnoise()
+ monitor.run()
+
+ update_monitor_table(module, start_time, log_file)
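The CDS calculation at the heart of `make_readnoise_image` can be sketched on synthetic data. The ramp below is fabricated for illustration — a linear signal plus Gaussian read noise of 5 DN per frame — so the recovered CDS noise should land near sqrt(2) * 5 ≈ 7 DN:

```python
import numpy as np

# Toy 4D ramp in DMS order (integration, group, y, x): signal accumulates
# linearly at 10 DN/group while 5 DN Gaussian read noise is added per frame.
rng = np.random.default_rng(seed=0)
nints, ngroups, ny, nx = 2, 10, 8, 8
ramp = np.arange(ngroups, dtype=float).reshape(1, ngroups, 1, 1) * 10.0
data = ramp + rng.normal(0.0, 5.0, size=(nints, ngroups, ny, nx))

# CDS stack: difference consecutive group pairs within each integration,
# then stack all integrations together (ngroups is even here, so the
# simple slice pairing used by the monitor applies directly)
cds_stack = np.concatenate(
    [data[i, 1::2] - data[i, ::2] for i in range(nints)], axis=0
)

# Readnoise image: per-pixel stddev through the CDS stack. The constant
# 10 DN signal difference cancels out of the stddev, leaving only noise.
readnoise = np.std(cds_stack, axis=0)
```

The production code additionally sigma-clips along the stack axis before taking the stddev, which suppresses cosmic-ray hits that a plain `np.std` would absorb.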
diff --git a/jwql/utils/constants.py b/jwql/utils/constants.py
index d1cc0654d..6455f4107 100644
--- a/jwql/utils/constants.py
+++ b/jwql/utils/constants.py
@@ -202,3 +202,4 @@
FILE_SUFFIX_TYPES = GUIDER_SUFFIX_TYPES + GENERIC_SUFFIX_TYPES + \
TIME_SERIES_SUFFIX_TYPES + NIRCAM_CORONAGRAPHY_SUFFIX_TYPES + \
NIRISS_AMI_SUFFIX_TYPES
+
diff --git a/jwql/website/apps/jwql/data_containers.py b/jwql/website/apps/jwql/data_containers.py
index 5e1fd2c82..a1d74bf3b 100644
--- a/jwql/website/apps/jwql/data_containers.py
+++ b/jwql/website/apps/jwql/data_containers.py
@@ -32,6 +32,8 @@
from astropy.time import Time
from django.conf import settings
import numpy as np
+from operator import itemgetter
+
# astroquery.mast import that depends on value of auth_mast
# this import has to be made before any other import of astroquery.mast
@@ -46,6 +48,7 @@
from jwedb.edb_interface import mnemonic_inventory
from jwql.database import database_interface as di
+from jwql.database.database_interface import load_connection
from jwql.edb.engineering_database import get_mnemonic, get_mnemonic_info
from jwql.instrument_monitors.miri_monitors.data_trending import dashboard as miri_dash
from jwql.instrument_monitors.nirspec_monitors.data_trending import dashboard as nirspec_dash
@@ -790,6 +793,48 @@ def get_proposal_info(filepaths):
return proposal_info
+def get_jwqldb_table_view_components(request):
+ """Renders view for JWQLDB table viewer.
+
+ Parameters
+ ----------
+ request : HttpRequest object
+ Incoming request from the webpage
+
+ Returns
+ -------
+ None
+ """
+
+ if request.method == 'POST':
+ # Make dictionary of tablename : class object
+ # This matches what the user selects in the drop down to the python obj.
+ tables_of_interest = {}
+ for item in di.__dict__.keys():
+ table = getattr(di, item)
+ if hasattr(table, '__tablename__'):
+ tables_of_interest[table.__tablename__] = table
+
+ session, base, engine, meta = load_connection(get_config()['connection_string'])
+ tablename_from_dropdown = request.POST['db_table_select']
+ table_object = tables_of_interest[tablename_from_dropdown] # Select table object
+
+ result = session.query(table_object)
+
+ result_dict = [row.__dict__ for row in result.all()] # Turn query result into list of dicts
+ column_names = table_object.__table__.columns.keys()
+
+ # Build list of column data based on column name.
+ data = []
+ for column in column_names:
+ column_data = list(map(itemgetter(column), result_dict))
+ data.append(column_data)
+
+ # Build table.
+ table_to_display = Table(data, names=column_names)
+ table_to_display.show_in_browser(jsviewer=True, max_lines=-1) # Negative max_lines shows all lines available.
+
+
def get_thumbnails_by_instrument(inst):
"""Return a list of thumbnails available in the filesystem for the
given instrument.
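The column-pivot step in `get_jwqldb_table_view_components` — turning a list of row dicts into per-column lists with `itemgetter` — works like this minimal sketch (the row values are made up for illustration):

```python
from operator import itemgetter

# Fabricated query result: one dict per database row, as produced by
# [row.__dict__ for row in result.all()]
result_dict = [
    {"aperture": "NRCA1_FULL", "full_image_mean": 5.1},
    {"aperture": "NRCB1_FULL", "full_image_mean": 4.8},
]
column_names = ["aperture", "full_image_mean"]

# itemgetter(col) pulls that key from each row dict; mapping it over the
# rows yields one list per column, the layout astropy.table.Table expects
data = [list(map(itemgetter(col), result_dict)) for col in column_names]
```

This pivots row-oriented query results into the column-oriented lists that `Table(data, names=column_names)` consumes.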
diff --git a/jwql/website/apps/jwql/templates/base.html b/jwql/website/apps/jwql/templates/base.html
index 1b7eefb06..592a59b72 100644
--- a/jwql/website/apps/jwql/templates/base.html
+++ b/jwql/website/apps/jwql/templates/base.html
@@ -131,6 +131,9 @@
{% endfor %}
EDB(current)
+
+
+ JWQLDB(current)
Documentation(current)
diff --git a/jwql/website/apps/jwql/templates/jwqldb_table_viewer.html b/jwql/website/apps/jwql/templates/jwqldb_table_viewer.html
new file mode 100644
index 000000000..8390222e2
--- /dev/null
+++ b/jwql/website/apps/jwql/templates/jwqldb_table_viewer.html
@@ -0,0 +1,34 @@
+{% extends "base.html" %}
+
+{% block preamble %}
+
+ Interactive Database Viewer - JWQL
+
+{% endblock %}
+
+{% block content %}
+
+
+ Explore JWQL database tables through the web browser
+
+
This page allows users to interactively explore the JWQL database tables with the Astropy Tables JavaScript viewer. Simply select a table from the dropdown menu.
+
+
+
+
+
+{% endblock %}
\ No newline at end of file
diff --git a/jwql/website/apps/jwql/urls.py b/jwql/website/apps/jwql/urls.py
index d87921efb..7050ab073 100644
--- a/jwql/website/apps/jwql/urls.py
+++ b/jwql/website/apps/jwql/urls.py
@@ -74,6 +74,7 @@
path('about/', views.about, name='about'),
path('dashboard/', views.dashboard, name='dashboard'),
path('edb/', views.engineering_database, name='edb'),
+ path('table_viewer', views.jwqldb_table_viewer, name='table_viewer'),
re_path(r'^(?P<inst>({}))/$'.format(instruments), views.instrument, name='instrument'),
re_path(r'^(?P<inst>({}))/archive/$'.format(instruments), views.archived_proposals, name='archive'),
re_path(r'^(?P<inst>({}))/unlooked/$'.format(instruments), views.unlooked_images, name='unlooked'),
diff --git a/jwql/website/apps/jwql/views.py b/jwql/website/apps/jwql/views.py
index c57a2cb78..8ba5b28fa 100644
--- a/jwql/website/apps/jwql/views.py
+++ b/jwql/website/apps/jwql/views.py
@@ -49,11 +49,13 @@
from .data_containers import get_current_flagged_anomalies
from .data_containers import get_proposal_info
from .data_containers import random_404_page
+from .data_containers import get_jwqldb_table_view_components
from .data_containers import thumbnails_ajax
from .data_containers import data_trending
from .data_containers import nirspec_trending
from .forms import AnomalySubmitForm, FileSearchForm
from .oauth import auth_info, auth_required
+from jwql.database.database_interface import load_connection
from jwql.utils.constants import JWST_INSTRUMENT_NAMES, MONITORS, JWST_INSTRUMENT_NAMES_MIXEDCASE
from jwql.utils.utils import get_base_url, get_config
@@ -382,6 +384,37 @@ def instrument(request, inst):
return render(request, template, context)
+def jwqldb_table_viewer(request):
+ """Generate the JWQL Table Viewer view.
+
+ Parameters
+ ----------
+ request : HttpRequest object
+ Incoming request from the webpage
+
+ Returns
+ -------
+ HttpResponse object
+ Outgoing response sent to the webpage
+ """
+
+ table_view_components = get_jwqldb_table_view_components(request)
+
+ session, base, engine, meta = load_connection(get_config()['connection_string'])
+ all_jwql_tables = engine.table_names()
+
+ template = 'jwqldb_table_viewer.html'
+ context = {
+ 'inst': '',
+ 'all_jwql_tables': all_jwql_tables,
+ 'table_view_components': table_view_components}
+
+ return render(request, template, context)
+
+
def not_found(request, *kwargs):
"""Generate a ``not_found`` page
diff --git a/style_guide/README.md b/style_guide/README.md
index 7a691fb44..cf05e8a79 100644
--- a/style_guide/README.md
+++ b/style_guide/README.md
@@ -11,7 +11,7 @@ It is assumed that the reader of this style guide has read and is familiar with
- The [PEP8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/)
- The [PEP257 Docstring Conventions Style Guide](https://www.python.org/dev/peps/pep-0257/)
-- The [`numpydoc` docstring convention](https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt)
+- The [`numpydoc` docstring convention](https://numpydoc.readthedocs.io/en/latest/format.html)
Workflow