DM-42077: Fix numpydoc doc strings #922

Merged 17 commits on Dec 11, 2023
15 changes: 15 additions & 0 deletions .github/workflows/docstyle.yaml
@@ -11,3 +11,18 @@ jobs:
    uses: lsst/rubin_workflows/.github/workflows/docstyle.yaml@main
    with:
      args: "python/lsst/daf/butler/core/"
  numpydoc:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4

      - name: Install numpydoc
        run: |
          python -m pip install --upgrade pip
          python -m pip install numpydoc

      - name: Validate docstrings
        run: python -m numpydoc.hooks.validate_docstrings $(find python -name "*.py")
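The same kind of check can be run against a single object locally; a minimal sketch using numpydoc's validation API (the dotted target name is illustrative, any importable path works):

```python
# Hedged sketch: print numpydoc validation errors for one importable object.
from numpydoc.validate import validate

report = validate("lsst.daf.butler.Butler")  # illustrative target
for code, message in report["errors"]:
    print(f"{code}: {message}")
```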
10 changes: 7 additions & 3 deletions .pre-commit-config.yaml
@@ -1,14 +1,14 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    rev: v4.5.0
    hooks:
      - id: check-yaml
        args:
          - "--unsafe"
      - id: end-of-file-fixer
      - id: trailing-whitespace
  - repo: https://github.com/psf/black-pre-commit-mirror
    rev: 23.9.1
    rev: 23.11.0
    hooks:
      - id: black
        # It is recommended to specify the latest version of Python
@@ -23,6 +23,10 @@ repos:
        name: isort (python)
  - repo: https://github.com/astral-sh/ruff-pre-commit
    # Ruff version.
    rev: v0.0.289
    rev: v0.1.7
    hooks:
      - id: ruff
  - repo: https://github.com/numpy/numpydoc
    rev: "v1.6.0"
    hooks:
      - id: numpydoc-validation
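The numpydoc-validation hook honors inline suppressions, which this PR uses on the `Butler` class further down. A hypothetical illustration:

```python
# Hypothetical example: silence PR02 ("unknown parameters") for one class
# whose docstring documents parameters accepted by a factory method.
class Example:  # numpydoc ignore=PR02
    """A class documenting factory-method parameters.

    Parameters
    ----------
    option : `str`
        Accepted by the factory method rather than ``__init__``.
    """
```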
24 changes: 24 additions & 0 deletions pyproject.toml
@@ -202,3 +202,27 @@ max-doc-length = 79

[tool.ruff.pydocstyle]
convention = "numpy"

[tool.numpydoc_validation]
checks = [
    "all",  # All except the rules listed below.
    "SA01",  # See Also section.
    "EX01",  # Example section.
    "SS06",  # Summary can go into second line.
    "GL01",  # Summary text can start on same line as """
    "GL08",  # Do not require docstring.
    "ES01",  # No extended summary required.
    "RT01",  # Unfortunately our @property trigger this.
    "RT02",  # Does not want named return value. DM style says we do.
    "SS05",  # pydocstyle is better at finding infinitive verb.
]
exclude = [
    "^test_.*",  # Do not test docstrings in test code.
    '^spatial\..*',  # Do not test doc strings in the spatial.py test code.
    '^lex\..*',  # Lexer
    '^parserLex\.',  # Docstrings are not numpydoc
    '^parserYacc\.',  # Docstrings are not numpydoc
    '^commands\.',  # Click docstrings, not numpydoc
    '^__init__$',
    '\._[a-zA-Z_]+$',  # Private methods.
]
44 changes: 35 additions & 9 deletions python/lsst/daf/butler/_butler.py
@@ -64,12 +64,12 @@
_LOG = getLogger(__name__)


class Butler(LimitedButler):
class Butler(LimitedButler):  # numpydoc ignore=PR02
    """Interface for data butler and factory for Butler instances.

    Parameters
    ----------
    config : `ButlerConfig`, `Config` or `str`, optional.
    config : `ButlerConfig`, `Config` or `str`, optional
        Configuration. Anything acceptable to the `ButlerConfig` constructor.
        If a directory path is given the configuration will be read from a
        ``butler.yaml`` file in that location. If `None` is given default
@@ -138,7 +138,28 @@ def _find_butler_class(
        config: Config | ResourcePathExpression | None = None,
        searchPaths: Sequence[ResourcePathExpression] | None = None,
    ) -> type[Butler]:
        """Find actual class to instantiate."""
        """Find actual class to instantiate.

        Parameters
        ----------
        config : `ButlerConfig`, `Config` or `str`, optional
            Configuration. Anything acceptable to the `ButlerConfig`
            constructor. If a directory path is given the configuration will be
            read from a ``butler.yaml`` file in that location. If `None` is
            given default values will be used. If ``config`` contains "cls"
            key then its value is used as a name of butler class and it must be
            a sub-class of this class, otherwise `DirectButler` is
            instantiated.
        searchPaths : `list` of `str`, optional
            Directory paths to search when calculating the full Butler
            configuration. Not used if the supplied config is already a
            `ButlerConfig`.

        Returns
        -------
        butler_class : `type`
            The type of `Butler` to instantiate.
        """
        butler_class_name: str | None = None
        if config is not None:
            # Check for optional "cls" key in config.
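The "cls" lookup described above can be exercised with a config that names a subclass; a hedged sketch (the class path is illustrative and must refer to a `Butler` subclass):

```python
# Hedged sketch: a "cls" key in the configuration selects the Butler
# subclass that from_config() will instantiate.
from lsst.daf.butler import Butler, Config

config = Config({"cls": "lsst.daf.butler.direct_butler.DirectButler"})
butler = Butler.from_config(config)  # would also need a valid repo config
```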
@@ -174,7 +195,7 @@ def from_config(

        Parameters
        ----------
        config : `ButlerConfig`, `Config` or `str`, optional.
        config : `ButlerConfig`, `Config` or `str`, optional
            Configuration. Anything acceptable to the `ButlerConfig`
            constructor. If a directory path is given the configuration will be
            read from a ``butler.yaml`` file in that location. If `None` is
@@ -216,6 +237,11 @@ def from_config(
            Additional keyword arguments passed to a constructor of actual
            butler class.

        Returns
        -------
        butler : `Butler`
            A `Butler` constructed from the given configuration.

        Notes
        -----
        Calling this factory method is identical to calling
@@ -828,9 +854,9 @@ def get_dataset(
        storage_class : `str` or `StorageClass` or `None`
            A storage class to use when creating the returned entry. If given
            it must be compatible with the default storage class.
        dimension_records: `bool`, optional
        dimension_records : `bool`, optional
            If `True` the ref will be expanded and contain dimension records.
        datastore_records: `bool`, optional.
        datastore_records : `bool`, optional
            If `True` the ref will contain associated datastore records.

        Returns
@@ -884,9 +910,9 @@ def find_dataset(
        storage_class : `str` or `StorageClass` or `None`
            A storage class to use when creating the returned entry. If given
            it must be compatible with the default storage class.
        dimension_records: `bool`, optional
        dimension_records : `bool`, optional
            If `True` the ref will be expanded and contain dimension records.
        datastore_records: `bool`, optional.
        datastore_records : `bool`, optional
            If `True` the ref will contain associated datastore records.
        **kwargs
            Additional keyword arguments passed to
@@ -1094,7 +1120,7 @@ def ingest(

        Parameters
        ----------
        datasets : `FileDataset`
        *datasets : `FileDataset`
            Each positional argument is a struct containing information about
            a file to be ingested, including its URI (either absolute or
            relative to the datastore root, if applicable), a resolved
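A hedged usage sketch of the factory method and `get_dataset` documented above (the repository path, collection name, and UUID are placeholders):

```python
import uuid

from lsst.daf.butler import Butler

# Placeholders: an existing repository path and a default collection.
butler = Butler.from_config("/path/to/repo", collections=["HSC/defaults"])

# Look up a dataset by its (illustrative) UUID; the keyword arguments
# documented above control how much metadata the returned ref carries.
dataset_id = uuid.UUID("00000000-0000-4000-8000-000000000000")
ref = butler.get_dataset(dataset_id, dimension_records=True)
print(ref)  # `None` if no dataset with this ID exists
```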
8 changes: 7 additions & 1 deletion python/lsst/daf/butler/_column_type_info.py
@@ -82,7 +82,13 @@ class ColumnTypeInfo:

    @property
    def ingest_date_pytype(self) -> type:
        """Python type corresponding to ``ingest_date`` column type."""
        """Python type corresponding to ``ingest_date`` column type.

        Returns
        -------
        `type`
            The Python type.
        """
        if self.ingest_date_dtype is ddl.AstropyTimeNsecTai:
            return astropy.time.Time
        elif self.ingest_date_dtype is sqlalchemy.TIMESTAMP:
23 changes: 14 additions & 9 deletions python/lsst/daf/butler/_config.py
@@ -129,6 +129,11 @@ class Loader(yamlLoader):
    Notes
    -----
    See https://davidchall.github.io/yaml-includes.html

    Parameters
    ----------
    stream : `str` or `io.IO`
        The stream to parse.
    """

    def __init__(self, stream: str | IO):  # types-PyYAML annotates 'stream' with a private type
@@ -295,7 +300,7 @@ def ppprint(self) -> str:
        Returns
        -------
        s : `str`
            A prettyprint formatted string representing the config
            A prettyprint formatted string representing the config.
        """
        return pprint.pformat(self._data, indent=2, width=1)

@@ -321,7 +326,7 @@ def fromString(cls, string: str, format: str = "yaml") -> Config:
        Parameters
        ----------
        string : `str`
            String containing content in specified format
            String containing content in specified format.
        format : `str`, optional
            Format of the supplied string. Can be ``json`` or ``yaml``.
@@ -346,7 +351,7 @@ def fromYaml(cls, string: str) -> Config:
        Parameters
        ----------
        string : `str`
            String containing content in YAML format
            String containing content in YAML format.

        Returns
        -------
@@ -393,7 +398,7 @@ def __initFromYaml(self, stream: IO | str | bytes) -> Config:

        Parameters
        ----------
        stream: `IO` or `str`
        stream : `IO` or `str`
            Stream to pass to the YAML loader. Accepts anything that
            `yaml.load` accepts. This can include a string as well as an
            IO stream.
@@ -414,7 +419,7 @@ def __initFromJson(self, stream: IO | str | bytes) -> Config:

        Parameters
        ----------
        stream: `IO` or `str`
        stream : `IO` or `str`
            Stream to pass to the JSON loader. This can include a string as
            well as an IO stream.
@@ -699,7 +704,7 @@ def update(self, other: Mapping[str, Any]) -> None:  # type: ignore[override]
        Parameters
        ----------
        other : `dict` or `Config`
            Source of configuration:
            Source of configuration.

        Examples
        --------
@@ -726,7 +731,7 @@ def merge(self, other: Mapping) -> None:
        Parameters
        ----------
        other : `dict` or `Config`
            Source of configuration:
            Source of configuration.
        """
        if not isinstance(other, Mapping):
            raise TypeError(f"Can only merge a Mapping into a Config, not {type(other)}")
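The practical difference between `update` and `merge`: `update` lets the incoming mapping win, while `merge` only adds keys that are not already present. A hedged sketch with illustrative keys:

```python
from lsst.daf.butler import Config

config = Config({"a": {"b": 1}})
config.update({"a": {"b": 2}})          # incoming value wins: "b" becomes 2
config.merge({"a": {"b": 99}, "c": 3})  # existing "b" is kept; "c" is added
print(config["a"])
```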
@@ -927,7 +932,7 @@ def dumpToUri(

        Parameters
        ----------
        uri: `lsst.resources.ResourcePathExpression`
        uri : `lsst.resources.ResourcePathExpression`
            URI of location where the Config will be written.
        updateFile : bool, optional
            If True and uri does not end on a filename with extension, will
@@ -1119,7 +1124,7 @@ class ConfigSubset(Config):
    ----------
    other : `Config` or `~lsst.resources.ResourcePathExpression` or `dict`
        Argument specifying the configuration information as understood
        by `Config`
        by `Config`.
    validate : `bool`, optional
        If `True` required keys will be checked to ensure configuration
        consistency.
2 changes: 1 addition & 1 deletion python/lsst/daf/butler/_config_support.py
@@ -25,7 +25,7 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

"""Support for configuration snippets"""
"""Support for configuration snippets."""

from __future__ import annotations

38 changes: 29 additions & 9 deletions python/lsst/daf/butler/_dataset_ref.py
@@ -234,6 +234,24 @@ def direct(
    ) -> SerializedDatasetRef:
        """Construct a `SerializedDatasetRef` directly without validators.

        Parameters
        ----------
        id : `str`
            The UUID in string form.
        run : `str`
            The run for this dataset.
        datasetType : `dict` [`str`, `typing.Any`]
            A representation of the dataset type.
        dataId : `dict` [`str`, `typing.Any`]
            A representation of the data ID.
        component : `str` or `None`
            Any component associated with this ref.

        Returns
        -------
        serialized : `SerializedDatasetRef`
            A Pydantic model representing the given parameters.

        Notes
        -----
        This differs from the pydantic "construct" method in that the arguments
@@ -300,10 +318,12 @@ class DatasetRef:
        ``dataId``. `~DatasetIdGenEnum.DATAID_TYPE_RUN` makes a
        deterministic UUID5-type ID based on a dataset type name, run
        collection name, and ``dataId``.
    datastore_records : `DatasetDatastoreRecords` or `None`
        Datastore records to attach.

    See Also
    --------
    :ref:`daf_butler_organizing_datasets`
    Notes
    -----
    See also :ref:`daf_butler_organizing_datasets`
    """

    _serializedType = SerializedDatasetRef
@@ -759,7 +779,7 @@ def replace(
        run : `str` or `None`
            If not `None` then update run collection name. If ``dataset_id`` is
            `None` then this will also cause new dataset ID to be generated.
        storage_class : `str` or `StorageClass` or `None`.
        storage_class : `str` or `StorageClass` or `None`
            The new storage class. If not `None`, replaces existing storage
            class.
        datastore_records : `DatasetDatastoreRecords` or `None`
@@ -791,7 +811,7 @@
            datastore_records=datastore_records,
        )

    def is_compatible_with(self, ref: DatasetRef) -> bool:
    def is_compatible_with(self, other: DatasetRef) -> bool:
        """Determine if the given `DatasetRef` is compatible with this one.

        Parameters
@@ -829,13 +849,13 @@ class associated with the dataset type of the other ref can be
        be converted to the original python type. The reverse is not guaranteed
        and depends on whether bidirectional converters have been registered.
        """
        if self.id != ref.id:
        if self.id != other.id:
            return False
        if self.dataId != ref.dataId:
        if self.dataId != other.dataId:
            return False
        if self.run != ref.run:
        if self.run != other.run:
            return False
        return self.datasetType.is_compatible_with(ref.datasetType)
        return self.datasetType.is_compatible_with(other.datasetType)

    datasetType: DatasetType
    """The definition of this dataset (`DatasetType`).
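A hedged sketch of the renamed `is_compatible_with` together with `replace` from earlier in this file, assuming `ref` is an existing resolved `DatasetRef` and that converters between the two storage classes are registered (the storage class name is illustrative):

```python
# Assumes `ref` is a resolved DatasetRef previously obtained from a butler.
# replace() changes only the storage class, leaving the dataset ID, data ID,
# and run untouched, so the two refs should remain compatible.
other = ref.replace(storage_class="ArrowAstropy")  # illustrative name
assert ref.is_compatible_with(other)
```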