Use Monte-Carlo simulations for frame unwrapping and WFM #163
Conversation
I only looked at the high-level structure as I am unfamiliar with the algorithmic details.
        radius=ch.radius,
    )
    for name, ch in wfm_choppers.items()
}
The module-level choppers only seem to be used in tests. Do they need to be in this module or can they be moved into the test module?
We could move them, but I would say those choppers are meant to be used with the FakeBeamline, so I would leave them there?
If they are useful outside of ESSreduce / ScippNeutron, yes. But can you actually use them in any other package?
I guess something that can create fake beamline data that looks like semi-realistic NeXus data could be used by other packages for testing? (even though I don't have anything specific in mind right now)
- Monitor event data including event_time_offset and event_time_zero
"""

# from __future__ import annotations
# from __future__ import annotations
parameters).
"""
facilities = {"ess": sc.scalar(14.0, unit="Hz")}
return PulsePeriod(1.0 / facilities[facility])
I made an attempt to extract source information and functions into a class in ESSspectroscopy in scipp/essspectroscopy@ff52978
Does it make sense to have something like this in a separate module in ESSreduce?
Keeping a database of such data would be useful, e.g. also the wavelength range of the source, or a rough range of the distances covered by Ltotal for each instrument?
But maybe we can delay that to another PR?
> But maybe we can delay that to another PR?
Definitely. Just wanted to let you know about it.
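For illustration, a minimal sketch of what such a "source database" module could look like. All names, fields, and numbers below are assumptions made up for the example, not an existing ESSreduce or ESSspectroscopy API:

```python
from dataclasses import dataclass

import scipp as sc


# Hypothetical container for per-facility source data.
@dataclass(frozen=True)
class SourceInfo:
    name: str
    frequency: sc.Variable  # pulse repetition rate
    wavelength_range: sc.Variable  # rough usable wavelength band


ESS = SourceInfo(
    name="ESS",
    frequency=sc.scalar(14.0, unit="Hz"),
    # Placeholder numbers, only to show the shape of the data.
    wavelength_range=sc.array(dims=["wavelength"], values=[0.5, 12.0], unit="angstrom"),
)


def pulse_period(source: SourceInfo) -> sc.Variable:
    # The pulse period is the inverse of the pulse repetition rate.
    return (1.0 / source.frequency).to(unit="s")
```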
Raw detector data loaded from a NeXus file, e.g., NXdetector containing
NXevent_data.
"""
return Ltotal[RunType](da.coords["Ltotal"])
Similarly to the source, does it make sense to have a Beamline class / module to encode common data? I am also thinking of metadata like the beamline's name, facility, etc, which would be useful for writing output files. See also scipp/scippneutron#473
Looks good to me!
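As a rough illustration of the kind of metadata container meant here (purely hypothetical names and values, nothing that exists in ESSreduce or ScippNeutron today):

```python
from dataclasses import dataclass

import scipp as sc


# Hypothetical beamline metadata, e.g. for writing output files.
@dataclass(frozen=True)
class BeamlineInfo:
    name: str  # e.g. "DREAM"
    facility: str  # e.g. "ESS"
    ltotal_range: sc.Variable  # rough range of source-to-detector distances


dream = BeamlineInfo(
    name="DREAM",
    facility="ESS",
    # Placeholder values, only to show the intended structure.
    ltotal_range=sc.array(dims=["Ltotal"], values=[60.0, 80.0], unit="m"),
)
```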
)

# TODO: move toa resolution to workflow parameter
binned = data.bin(distance=ndist, toa=500)
Is there a reason why we are using distance=ndist here instead of distance=distances? (Not that I think it matters, just wondering if there's something subtle going on here that I'm missing.)
Thanks for pointing this out. I refactored the distances.
There is a bit of funny stuff going on with bin edges and midpoints when binning data and then sending it to the bilinear interpolator, but I think I got it right now.
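For context, the subtlety being referred to is roughly this: after binning, the distance and toa coordinates are bin edges, while an interpolator built on the table needs the bin centres. A sketch under assumed names (not the actual code in this PR):

```python
import scipp as sc
from scipy.interpolate import RegularGridInterpolator


def make_interpolator(binned: sc.DataArray) -> RegularGridInterpolator:
    # `binned` is assumed to be 2D event data binned in (distance, toa); its
    # coords are bin *edges*, but the interpolation grid must use bin *centres*.
    distance = sc.midpoints(binned.coords["distance"]).values
    toa = sc.midpoints(binned.coords["toa"]).values
    # E.g. the mean simulated wavelength in each (distance, toa) bin.
    values = binned.bins.mean().values
    return RegularGridInterpolator((distance, toa), values, bounds_error=False)
```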
timeofflight = (sc.midpoints(wavelength.coords["distance"])) / velocity
out = timeofflight.to(unit=time_unit, copy=False)
# Include the variances computed above
out.variances = variance.values
Aren't the variances the variance of the wavelength?
It seems strange to set the time-of-flight variance equal to the wavelength variance.
Yes, good point.
I need to compute something dimensionless when using the threshold, probably the standard deviation (sigma) over the mean (mu) or something?
I tried that, but for some reason, when computing sigma/mu on the wavelengths I don't get the same values as when computing it on the tofs. I thought they would be the same, because it's only a factor of distance * m_n / h everywhere.
What am I missing?
Ok. I figured out what my mistake was. I do get the same for both. I will use that instead.
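For the record, the reason the two agree: for a fixed distance, time-of-flight is the wavelength multiplied by a constant (distance * m_n / h), and the relative spread sigma/mu is invariant under such a scaling. A quick numerical check with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
wav = rng.normal(3.0, 0.1, 100_000)  # wavelengths, arbitrary made-up values
c = 2.5e3  # stands in for distance * m_n / h (units ignored for this check)
tof = c * wav

# Both ratios are identical up to floating-point noise.
print(np.std(wav) / np.mean(wav), np.std(tof) / np.mean(tof))
```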

lookup_values = lookup.data.to(unit=elem_unit(toas), copy=False).values
# Merge all masks into a single mask
if lookup.masks:
Would it make sense to do this in the provider that creates MaskedTimeOfFlightLookup? It seems odd to modify the lookup table in a function that uses it rather than produces it.
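For illustration, the merging could live in a small helper called from the provider that builds the lookup table. The function and mask names below are assumptions, not the actual code:

```python
import operator
from functools import reduce

import scipp as sc


def with_merged_mask(lookup: sc.DataArray) -> sc.DataArray:
    # Fold all masks of the lookup table into a single combined mask so that
    # downstream consumers of the table do not have to merge them themselves.
    if not lookup.masks:
        return lookup
    merged = reduce(operator.or_, lookup.masks.values())
    return sc.DataArray(
        data=lookup.data,
        coords=dict(lookup.coords),
        masks={"combined": merged},
    )
```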
time_of_arrival:
    Time of arrival of the neutrons at the position where the events were recorded.
    For a ``tof`` simulation, this is just the position of the component (chopper or
    detector) where the events are recorded. For a ``McStas`` simulation, this is
    the position of the event monitor.
Was the second part meant for the distance: field?
out = da.copy(deep=False)
if out.bins is not None:
    parts = out.bins.constituents
    out.data = sc.bins(**parts)
    parts["data"] = tofs
    out.bins.coords["tof"] = _bins_no_validate(**parts)
else:
    out.coords["tof"] = tofs
Can use da.bins.assign_coords and da.assign_coords to avoid some back and forth?
I changed it to:
if da.bins is not None:
    parts = da.bins.constituents
    parts["data"] = tofs
    out = da.bins.assign_coords(tof=_bins_no_validate(**parts))
else:
    out = da.assign_coords(tof=tofs)
Seems to work. Is that what you meant?
My assumption is that parts["data"] = tofs does not modify the data inside da. Am I right?
Yes, parts is just a dict.
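Right. A tiny illustration of that with toy data (just to show the point, not taken from the PR):

```python
import numpy as np
import scipp as sc

events = sc.DataArray(
    data=sc.ones(dims=["event"], shape=[6]),
    coords={"x": sc.array(dims=["event"], values=np.linspace(0.0, 1.0, 6), unit="m")},
)
binned = events.bin(x=2)

parts = binned.bins.constituents  # a plain dict with 'begin', 'end', 'dim', 'data'
parts["data"] = parts["data"] * 2.0  # rebinds the dict entry only

# `binned` is unchanged: the event weights still sum to 6.
print(binned.bins.sum().sum().value)
```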
Hmm, I need scipp=25.01 to use da.bins.assign_coords :-(
Seems it was released now.
Moving frame unwrapping and WFM processing from scippneutron to essreduce
In this PR, we move away from our analytical computation of frame boundaries for frame unwrapping and WFM, where the polygons were fitted with straight lines.
That method typically yielded errors towards the edges of the frames, where the polygon fit was less accurate (the lines would in some cases fall outside the polygons). Using a 3rd-order fit helped but still wasn't optimal.
In this work, we instead use the tof package to illuminate the chopper cascade with a pulse of neutrons. We then propagate the neutrons that exit the cascade to the distances of the pixels (this is just a single broadcast operation) and build a lookup table that gives us wavelength (or time-of-flight) as a function of distance and time-of-arrival.
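In pseudocode, the construction of the table looks roughly like this. This is a simplified sketch with assumed names and data layout, not the actual implementation in this PR:

```python
import scipp as sc


def make_lookup(
    sim: sc.DataArray, distances: sc.Variable, toa_bins: int = 500
) -> sc.DataArray:
    # `sim` is assumed to hold one entry per simulated neutron that made it
    # through the chopper cascade, with the wavelength as data and event
    # coordinates 'toa' (time-of-arrival) and 'distance' (each neutron
    # broadcast to every requested pixel distance).
    binned = sim.bin(distance=distances, toa=toa_bins)
    # Mean simulated wavelength in each (distance, toa) bin.
    return binned.bins.mean()
```

The resulting dense table would then be interpolated (bilinearly) at each detected event's (Ltotal, event_time_offset) to assign it a wavelength or time-of-flight.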
The results are better than with the previous method (tested on a DREAM simulation), and the code is simpler.
The computation is somewhat more expensive than before, but for now it is not very noticeable.
More importantly, this opens the door to other simulation methods. Instead of tof, one could in the future use McStas to make a much more realistic simulation of neutrons travelling through the cascade, including effects like non-instantaneous opening of choppers or neutron reflections along the beam guide.
See the frame-unwrapping and wfm-time-of-flight notebooks for a description of how the algorithm works.
Comparison with previous version:
Plots of dspacing vs 2theta:
- true: using the true wavelengths of the neutrons from the simulation data
- from_tof: the current version implemented in this PR
- from_main: the version on scippneutron/main using a linear fit to polygons
- from_cubic: an (unpublished) improvement on scippneutron/main using a cubic fit to polygons (see https://github.com/scipp/scippneutron/tree/cubic-polygons)

1D dspacing plot: