2.4.13 bugfix release (#838)
* Import from within source tree overly restrictive (#819)

In order to avoid same-name conflicts with standard-library modules (such as io.py), phoebe shouldn't be imported from its own source tree. The original check blocked the entire package tree, including all auxiliary directories, which was overly restrictive and prevented pytest from running when phoebe was installed with `pip install -e`. This PR fixes that by blocking imports only from the root directory and from the root/phoebe tree (the actual sources). Closes #806.
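A minimal sketch of the narrower guard, mirroring the phoebe/__init__.py diff below (the raised ImportError stands in for whatever the module actually does on this condition):

```python
import os

# Repository root: two directory levels above phoebe/__init__.py.
_root_dir = os.path.abspath(os.path.split(os.path.split(__file__)[0])[0])

# Block imports only from the repo root and from root/phoebe (the actual
# sources); auxiliary directories such as the test tree remain usable.
if os.getcwd() == _root_dir or os.path.join(_root_dir, 'phoebe') in os.getcwd():
    raise ImportError("phoebe should not be imported from its own source tree")
```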

* update tests for changes to pytest (#820)

* Fixing up all pytests
The new pytest version raises an error instead of a warning when tests return a value or when assert is used incorrectly. All tests have been updated to avoid these issues with the latest pytest.

* Update f-string to be compatible with python 3.7

The f-string format f'{var=}' was introduced in Python 3.8, so it caused the Python 3.7 tests to fail. This fixes that issue.
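A hypothetical before/after pair for the return-value change (the test and helper names are made up):

```python
EXPECTED_FLUX = 1.0

def compute_flux():
    # stand-in for the real computation under test
    return 1.0

# Before: the comparison was returned rather than checked, and recent
# pytest versions flag a non-None return value as an error.
def test_flux_old():
    return compute_flux() == EXPECTED_FLUX

# After: assert the condition so a failure actually fails the test.
def test_flux():
    assert compute_flux() == EXPECTED_FLUX
```

And the f-string compatibility fix in miniature:

```python
var = 42
print(f'{var=}')     # Python >= 3.8 only; prints "var=42"
print(f'var={var}')  # Python 3.7-compatible equivalent; prints "var=42"
```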

---------

Co-authored-by: Andrej Prsa <[email protected]>

* Dynamical RVs now avoid meshing (#823)

* Dynamical RVs now avoid meshing
Calling b.compute_pblums() built the mesh and treated dynamical RVs as mesh-dependent. This fixes that both for run_compute() and for the direct compute_pblums(), compute_l3s(), and compute_ld_coeffs() bundle methods. A new function, b._datasets_that_require_meshing(), filters the datasets that require a mesh, i.e. 'lc', 'lp' and flux-weighted 'rv'. Closes #812.

* Generalized dataset filtering with b._datasets_where()

* Adding backend exception to mesh computation
Even if the phoebe backend doesn't need meshes for a particular dataset, other backends might. This is now handled in the `_datasets_where()` function.
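A usage sketch under assumed labels (the dataset and compute names are hypothetical; `_datasets_where()` itself appears in the bundle.py diff below):

```python
import phoebe

b = phoebe.default_binary()
b.add_dataset('lc', dataset='lc01')
b.add_dataset('rv', dataset='rv01')
# dynamical RVs are computed from the orbit alone and don't touch the mesh:
b.set_value_all(qualifier='rv_method', value='dynamical')

# with the phoebe backend, only the light curve should require meshing:
print(b._datasets_where(compute='phoebe01', mesh_needed=True))  # expect ['lc01']
```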


---------

Co-authored-by: Kyle Conroy <[email protected]>

* run_checks_compute() bugfix (#837)

* run_checks_compute() bugfix
The run_checks_compute() function included the internal "_default" compute parameter set when checking for the existence of the ck2004 model atmosphere tables in the passband files. This caused the checks to fail even when ck2004 was never needed. A sketch of the corrected filter follows after this list.

* Incorporating @kecnry's comments into the PR (along with additional comments)

* Fixing the if statement per @kecnry's instructions.
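A sketch of the corrected filter (the bundle and compute labels are hypothetical): check_default=True excludes the internal '_default' parameter set, whose atm value is ck2004, so the table check only covers atmospheres that are actually in use:

```python
import phoebe

b = phoebe.default_binary()

# The old code filtered with check_default=False (via _skip_filter_checks)
# and so picked up the '_default' pset; check_default=True drops it.
atm_params = b.filter(qualifier='atm', kind='phoebe', compute='phoebe01',
                      check_visible=False, check_default=True)
for atmparam in atm_params.to_list():
    print(atmparam.component, atmparam.get_value())
```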

* Fix treatment of distance for alternate backends (#832)

* regression test

* don't duplicate distance handling for alt backends (see the sketch after this list)

* temporarily skip jktebop

* use top-level default_binary in test
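A minimal sketch of the scaling now applied only by the phoebe and legacy backends (the function name and numbers are illustrative): absolute fluxes fall off with the inverse square of the distance and are offset by third light, so repeating this for backends that already fold distance into their fluxes would double-count it:

```python
import numpy as np

def scale_fluxes(fluxes_abs, distance, l3=0.0):
    """Inverse-square distance scaling plus third light (illustrative)."""
    return np.asarray(fluxes_abs) / distance**2 + l3

# absolute fluxes in W/m^2, distance in m (~10 pc), l3 in W/m^2:
fluxes = scale_fluxes([1.2e16, 1.3e16], distance=3.086e17, l3=0.0)
```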

* version and changelog for 2.4.13 release

* update setup-python to v5

to avoid a deprecation warning

---------

Co-authored-by: Andrej Prsa <[email protected]>
Co-authored-by: Andrej Prsa <[email protected]>
3 people authored Apr 1, 2024
1 parent 8833cbf commit 01e109c
Showing 53 changed files with 830 additions and 856 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/on_pr.yml
@@ -26,7 +26,7 @@ jobs:
uses: actions/checkout@v4

- name: Setup python ${{ matrix.python-version }}
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}

6 changes: 6 additions & 0 deletions README.md
@@ -85,6 +85,12 @@ To understand how to use PHOEBE, please consult the [tutorials, scripts and manu
CHANGELOG
----------

### 2.4.13

* optimization: dynamical RVs avoid unnecessary meshing
* run_checks no longer requires ck2004 atmosphere tables if no datasets use ck2004
* fix treatment of distance for alternate backends (ellc, jktebop)

### 2.4.12 - build system update

* upgrade the build system to pyproject.toml with setuptools as backend and pip as frontend.
8 changes: 4 additions & 4 deletions phoebe/__init__.py
@@ -17,7 +17,7 @@
"""

__version__ = '2.4.12'
__version__ = '2.4.13'

import os as _os
import sys as _sys
@@ -26,9 +26,9 @@
import atexit
import re

# People shouldn't import Phoebe from the installation directory (inspired upon
# pymc warning message).
if _os.getcwd().find(_os.path.abspath(_os.path.split(_os.path.split(__file__)[0])[0]))>-1:
# People shouldn't import phoebe from the root directory or from root/phoebe:
_root_dir = _os.path.abspath(_os.path.split(_os.path.split(__file__)[0])[0])
if _os.getcwd() == _root_dir or _os.path.join(_root_dir, 'phoebe') in _os.getcwd():
# We have a clash of package name with the standard library: we implement an
# "io" module and also they do. This means that you can import Phoebe from its
# main source tree; then there is no difference between io from here and io
124 changes: 81 additions & 43 deletions phoebe/frontend/bundle.py
@@ -3765,7 +3765,7 @@ def run_checks_compute(self, compute=None, solver=None, solution=None, figure=No

pb_needs_Inorm = True
pb_needs_Imu = True
pb_needs_ld = True #np.any([p.get_value()!='interp' for p in self.filter(qualifier='ld_mode', dataset=pbparam.dataset, context='dataset', **_skip_filter_checks).to_list()])
pb_needs_ld = True
pb_needs_ldint = True

missing_pb_content = []
Expand All @@ -3780,8 +3780,10 @@ def run_checks_compute(self, compute=None, solver=None, solution=None, figure=No
True, 'run_compute')

# NOTE: atms are not attached to datasets, but per-compute and per-component
for atmparam in self.filter(qualifier='atm', kind='phoebe', compute=computes, **_skip_filter_checks).to_list() + self.filter(qualifier='ld_coeffs_source').to_list():

# NOTE: atmparam includes a '_default' compute pset, which depends on the
# ck2004 atmospheres; the checks should not include it. This is achieved
# by filtering check_visible=False and check_default=True in the line below:
for atmparam in self.filter(qualifier='atm', kind='phoebe', compute=computes, check_visible=False, check_default=True).to_list() + self.filter(qualifier='ld_coeffs_source').to_list():
# check to make sure passband supports the selected atm
atm = atmparam.get_value(**_skip_filter_checks)
if atmparam.qualifier == 'ld_coeffs_source' and atm == 'auto':
@@ -10199,7 +10201,6 @@ def ui_figures(self, web_client=None, blocking=None):

return self._launch_ui(web_client, 'figures', blocking=blocking)


def compute_ld_coeffs(self, compute=None, set_value=False, **kwargs):
"""
Compute the interpolated limb darkening coefficients.
@@ -10246,19 +10247,25 @@ def compute_ld_coeffs(self, compute=None, set_value=False, **kwargs):
appropriate length given the respective value of `ld_func`).
"""

# check to make sure value of passed compute is valid
if compute is None:
if len(self.computes)==1:
if len(self.computes) == 1:
compute = self.computes[0]
else:
raise ValueError("must provide compute")
if not isinstance(compute, str):
raise TypeError("compute must be a single value (string)")

compute_ps = self.get_compute(compute, **_skip_filter_checks)
datasets = kwargs.pop('dataset') if 'dataset' in kwargs else self._datasets_where(compute=compute, mesh_needed=True)

# we'll add 'bol' to the list of default datasets... but only if bolometric is needed for irradiation
compute_ps = self.get_compute(compute, **_skip_filter_checks)
needs_bol = compute_ps.get_value(qualifier='irrad_method', irrad_method=kwargs.get('irrad_method', None), default='none', **_skip_filter_checks) != 'none'
if needs_bol:
datasets += ['bol']

datasets = kwargs.pop('dataset', self.datasets + ['bol'] if needs_bol else self.datasets)
if len(datasets) == 0:
return {}
components = kwargs.pop('component', self.components)

# don't allow things like model='mymodel', etc
@@ -10383,6 +10390,27 @@ def restore_conf():

return system

def _datasets_where(self, compute, mesh_needed=False, l3_needed=False):
datasets = self.filter(compute=compute, context='compute', qualifier='enabled', value=True, **_skip_filter_checks).datasets
ds_kinds = [self.filter(dataset=ds, context='dataset', **_skip_filter_checks).kind for ds in datasets]
backend = self.filter(compute=compute, context='compute', **_skip_filter_checks).kind

subset = []

if l3_needed:
subset += [ds for ds in datasets if len(self.filter(qualifier='l3_mode', dataset=ds, context='dataset', check_visible=True)) > 0]

if mesh_needed:
subset += [ds for ds, kind in zip(datasets, ds_kinds)
if kind == 'lc'
or kind == 'lp'
or (kind == 'rv' and backend != 'phoebe')
or (kind == 'rv' and len(self.filter(qualifier='rv_method', dataset=ds, compute=compute, value='flux-weighted', **_skip_filter_checks)) > 0)
]

# subset can have repeated entries; return unique occurrences:
return list(set(subset))

def compute_l3s(self, compute=None, use_pbfluxes={},
set_value=False, **kwargs):
"""
@@ -10426,12 +10454,6 @@
"""
logger.debug("b.compute_l3s")

datasets = kwargs.pop('dataset', self.filter('l3_mode', check_visible=True).datasets)
if isinstance(datasets, str):
datasets = [datasets]



if compute is None:
if len(self.computes)==1:
compute = self.computes[0]
@@ -10440,6 +10462,12 @@
if not isinstance(compute, str):
raise TypeError("compute must be a single value (string)")

# either take user-passed datasets or datasets that have an l3_mode:
datasets = kwargs.pop('dataset') if 'dataset' in kwargs else self._datasets_where(compute=compute, l3_needed=True)
if isinstance(datasets, str):
datasets = [datasets]

# make sure all parameters are up to date:
self.run_delayed_constraints()

datasets_need_pbflux = [d for d in datasets if d not in use_pbfluxes.keys()]
@@ -10457,10 +10485,9 @@
**kwargs)

# don't allow things like model='mymodel', etc
if not kwargs.get('skip_checks', False):
forbidden_keys = parameters._meta_fields_filter
compute_ps = self.get_compute(compute, **_skip_filter_checks)
self._kwargs_checks(kwargs, additional_allowed_keys=['system', 'skip_checks', 'ret_structured_dicts', 'pblum_method']+compute_ps.qualifiers, additional_forbidden_keys=forbidden_keys)
forbidden_keys = parameters._meta_fields_filter
compute_ps = self.get_compute(compute, **_skip_filter_checks)
self._kwargs_checks(kwargs, additional_allowed_keys=['system', 'skip_checks', 'ret_structured_dicts', 'pblum_method']+compute_ps.qualifiers, additional_forbidden_keys=forbidden_keys)

ret_structured_dicts = kwargs.get('ret_structured_dicts', False)
l3s = {}
@@ -10620,7 +10647,26 @@ def compute_pblums(self, compute=None, model=None, pblum=True, pblum_abs=False,
"""
logger.debug("b.compute_pblums")

datasets = kwargs.pop('dataset', self.filter(qualifier='passband').datasets)
# check to make sure value of passed compute is valid
if compute is None:
if len(self.computes) == 1:
compute = self.computes[0]
else:
raise ValueError("must provide compute")
if not isinstance(compute, str):
raise TypeError("compute must be a single value (string)")

compute_ps = self.get_compute(compute=compute, **_skip_filter_checks)
ret_structured_dicts = kwargs.get('ret_structured_dicts', False)

# either take user-passed datasets or datasets that require a mesh:
datasets = kwargs.pop('dataset') if 'dataset' in kwargs else self._datasets_where(compute=compute, mesh_needed=True)

if len(datasets) == 0:
if ret_structured_dicts:
return None, {}, {}, {}, {}
return {}

if isinstance(datasets, str):
datasets = [datasets]

@@ -10638,16 +10684,6 @@
else:
components = valid_components

# check to make sure value of passed compute is valid
if compute is None:
if len(self.computes)==1:
compute = self.computes[0]
else:
raise ValueError("must provide compute")
if not isinstance(compute, str):
raise TypeError("compute must be a single value (string)")

compute_ps = self.get_compute(compute=compute, **_skip_filter_checks)
# NOTE: this is flipped so that stefan-boltzmann can manually be used even if the compute-options have kind='phoebe' and don't have that choice
pblum_method = kwargs.pop('pblum_method', compute_ps.get_value(qualifier='pblum_method', default='phoebe', **_skip_filter_checks))
t0 = self.get_value(qualifier='t0', context='system', unit=u.d, t0=kwargs.pop('t0', None), **_skip_filter_checks)
@@ -10716,7 +10752,6 @@
else:
raise ValueError("pblum_method='{}' not supported".format(pblum_method))

ret_structured_dicts = kwargs.get('ret_structured_dicts', False)
ret = {}

# pblum_*: {dataset: {component: value}}
Expand Down Expand Up @@ -11666,6 +11701,7 @@ def run_compute(self, compute=None, model=None, solver=None,
* ValueError: if any given dataset is enabled in more than one set of
compute options sent to run_compute.
"""

# NOTE: if we're already in client mode, we'll never get here in the client
# there detach is handled slightly differently (see parameters._send_if_client)
if isinstance(detach, str):
Expand Down Expand Up @@ -11713,6 +11749,7 @@ def run_compute(self, compute=None, model=None, solver=None,
# NOTE: _prepare_compute calls run_checks_compute and will handle raising
# any necessary errors
model, computes, datasets, do_create_fig_params, changed_params, overwrite_ps, kwargs = self._prepare_compute(compute, model, dataset, from_export=False, **kwargs)

_ = kwargs.pop('do_create_fig_params', None)

if use_server is None:
@@ -11912,8 +11949,7 @@ def restore_conf():
# TODO: have this return a dictionary like pblums/l3s that we can pass on to the backend?

# we need to check both for enabled but also passed via dataset kwarg
ds_kinds_enabled = self.filter(dataset=dataset_this_compute, context='dataset', **_skip_filter_checks).kinds
if 'lc' in ds_kinds_enabled or 'rv' in ds_kinds_enabled or 'lp' in ds_kinds_enabled:
if len(self._datasets_where(compute=compute, mesh_needed=True)) > 0:
logger.info("run_compute: computing necessary ld_coeffs, pblums, l3s")
self.compute_ld_coeffs(compute=compute, skip_checks=True, set_value=True, **{k:v for k,v in kwargs.items() if k in computeparams.qualifiers})
# NOTE that if pblum_method != 'phoebe', then system will be None
@@ -12090,20 +12126,22 @@ def _scale_fluxes_cfit(fluxes, scale_factor):
logger.debug("applying scale_factor={} to {} parameter in mesh".format(scale_factor, mesh_param.qualifier))
mesh_param.set_value(mesh_param.get_value()*scale_factor, ignore_readonly=True)

# handle flux scaling based on distance and l3
# NOTE: this must happen AFTER dataset scaling
distance = self.get_value(qualifier='distance', context='system', unit=u.m, **_skip_filter_checks)
for flux_param in ml_params.filter(qualifier='fluxes', kind='lc', **_skip_filter_checks).to_list():
dataset = flux_param.dataset
if dataset in datasets_dsscaled:
# then we already handle the scaling (including l3)
# above in dataset-scaling
continue
# alternate backends other than legacy already account for distance via pbflux
if computeparams.kind in ['phoebe', 'legacy']:
# handle flux scaling based on distance and l3
# NOTE: this must happen AFTER dataset scaling
distance = self.get_value(qualifier='distance', context='system', unit=u.m, **_skip_filter_checks)
for flux_param in ml_params.filter(qualifier='fluxes', kind='lc', **_skip_filter_checks).to_list():
dataset = flux_param.dataset
if dataset in datasets_dsscaled:
# then we already handle the scaling (including l3)
# above in dataset-scaling
continue

fluxes = flux_param.get_value(unit=u.W/u.m**2)
fluxes = fluxes/distance**2 + l3s.get(dataset)
fluxes = flux_param.get_value(unit=u.W/u.m**2)
fluxes = fluxes/distance**2 + l3s.get(dataset)

flux_param.set_value(fluxes, ignore_readonly=True)
flux_param.set_value(fluxes, ignore_readonly=True)

# handle vgamma and rv_offset
vgamma = self.get_value(qualifier='vgamma', context='system', unit=u.km/u.s, **_skip_filter_checks)
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -11,7 +11,7 @@

[project]
name = "phoebe"
version = "2.4.12"
version = "2.4.13"
description = "PHOEBE: modeling and analysis of eclipsing binary stars"
readme = "README.md"
requires-python = ">=3.7"