ENH: write NeXus data files #24

Open
prjemian opened this issue Feb 14, 2017 · 4 comments

@prjemian (Contributor) commented Feb 14, 2017

A NeXus suitcase.nexus.export() function, with a call signature identical to suitcase.hdf5.export(), is being developed in a fork+branch of the NSLS-II/suitcase repository.
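For reference, a minimal sketch of what such a mirrored entry point could look like. This is not the actual suitcase code; the shared call signature is assumed here to be export(headers, filename), and the entry naming is simplified:

```python
import h5py


def export(headers, filename):
    """Write documents from one or more databroker headers to a NeXus/HDF5 file (sketch)."""
    if not isinstance(headers, (list, tuple)):
        headers = [headers]
    with h5py.File(filename, "w") as f:
        for i, header in enumerate(headers):
            entry = f.create_group("entry_%d" % i)     # one NXentry per header
            entry.attrs["NX_class"] = "NXentry"
            nxdata = entry.create_group("data")
            nxdata.attrs["NX_class"] = "NXdata"
            # ... fill NXdata from the header's event streams here ...
```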

@prjemian (Contributor Author)

After a few hours' investment (BCDA-APS@b9814ad), the basic work is ready. A couple of TODO items are left in the code now. The current output is valid against the NeXus standard. All data are in the /NXentry/NXdata group. A bigger task is to hang the positioner and detector data in the various groups of the NeXus /NXentry/NXinstrument/ hierarchy as the raw data, and then make hard links back to the /NXentry/NXdata group.
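A hedged sketch of that layout, written with h5py directly (the positioner and detector names here are made up for illustration), including the hard links back to /NXentry/NXdata:

```python
import h5py
import numpy as np

with h5py.File("scan.h5", "w") as f:
    entry = f.create_group("entry")
    entry.attrs["NX_class"] = "NXentry"

    instrument = entry.create_group("instrument")
    instrument.attrs["NX_class"] = "NXinstrument"

    # raw positioner and detector data live under NXinstrument
    positioner = instrument.create_group("motor1")
    positioner.attrs["NX_class"] = "NXpositioner"
    positioner.create_dataset("value", data=np.linspace(0, 1, 11))

    detector = instrument.create_group("det1")
    detector.attrs["NX_class"] = "NXdetector"
    detector.create_dataset("data", data=np.random.random(11))

    # NXdata holds hard links back to the raw data for default plotting
    nxdata = entry.create_group("data")
    nxdata.attrs["NX_class"] = "NXdata"
    nxdata["motor1"] = positioner["value"]   # hard link
    nxdata["det1"] = detector["data"]        # hard link
```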

Since I copied nexus.py from hdf5.py and then modified it to taste, it now seems that I could remove the replicated _clean_dict() and _safe_attrs_assignment() functions from nexus.py and call them from the hdf5.py file instead. This is trivial.
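The de-duplication could be as simple as the following import in nexus.py (relative module path assumed):

```python
# re-use the helpers maintained in hdf5.py instead of keeping local copies
from .hdf5 import _clean_dict, _safe_attrs_assignment
```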

It looks like there may be some information in the JSON structures that are currently written as HDF5 group attributes, that is, information that only appears buried in the JSON and is not directly available in HDF5 datasets or groups. This needs to be reviewed with a plan to generalize.
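A hedged illustration of the issue: a document serialized to a JSON string and stored as a group attribute is valid HDF5, but its contents are not browsable as datasets or groups and must be decoded before use (the attribute name and keys below are made up):

```python
import json
import h5py

start_doc = {"plan_name": "scan", "motors": ["motor1"], "num_points": 11}

with h5py.File("scan.h5", "a") as f:
    entry = f.require_group("entry")
    # the whole document becomes a single opaque string attribute ...
    entry.attrs["start"] = json.dumps(start_doc)
    # ... so reading any one field back requires decoding the JSON first
    print(json.loads(entry.attrs["start"])["plan_name"])
```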

@tacaswell (Contributor)

Can you open a PR with that code?

I argue that dumping the header information into JSON strings is the general solution, since faking up nested dictionaries in HDF5 gets awkward (nested groups with scalar datasets work, but that seems off because much of this data really should be in attributes, and then you have to flatten it). The JSON blob should be revisited with a view to how to specialize the encoding to particular file layouts. This ties back to my comment in #26.
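To make the trade-off concrete, a small hedged comparison of the two approaches (the key names are illustrative, not from the actual documents):

```python
import json
import h5py

nested = {"detector": {"exposure_time": 0.1, "gain": {"high": 4, "low": 1}}}

with h5py.File("compare.h5", "w") as f:
    # (a) one JSON blob: general, but opaque to generic HDF5 tools
    f.attrs["config_json"] = json.dumps(nested)

    # (b) nested groups with scalar datasets: browsable, but awkward --
    # much of this is really metadata that belongs in attributes, and
    # moving it into attributes would require flattening the keys.
    def write_nested(group, d):
        for key, value in d.items():
            if isinstance(value, dict):
                write_nested(group.create_group(key), value)
            else:
                group.create_dataset(key, data=value)

    write_nested(f.create_group("config"), nested)
```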

@prjemian (Contributor Author)

Still a work-in-progress, but I will open a PR with the current state.

@prjemian (Contributor Author)

No unit tests have been created for this new code ... yet.
