Improve ASI detector support and reduce excessive logging #111
Conversation
…f setting `self.grabber.frametime = 0` to avoid failing 0-second frames
…need for 'asic' config. Reading tiff is faster than pgm virtually always, especially if PixelDepth = 2**N (benchmark: https://chatgpt.com/canvas/shared/67b4942e54808191bf08685ce305112d):

| bit_depth | pgm | png | tiff |
|---|---|---|---|
| 1 | 117.41 ± 6.32 ms | nan ± nan ms | 0.17 ± 0.03 ms |
| 4 | 117.24 ± 11.10 ms | nan ± nan ms | 0.17 ± 0.04 ms |
| 6 | 151.23 ± 48.35 ms | nan ± nan ms | 0.16 ± 0.01 ms |
| 8 | 0.08 ± 0.22 ms | 1.41 ± 0.09 ms | 0.20 ± 0.07 ms |
| 10 | 153.95 ± 13.69 ms | nan ± nan ms | 0.63 ± 0.10 ms |
| 12 | 153.15 ± 23.07 ms | nan ± nan ms | 0.46 ± 0.17 ms |
| 14 | 150.50 ± 17.25 ms | nan ± nan ms | 0.68 ± 0.11 ms |
| 16 | 1.84 ± 0.20 ms | 2.10 ± 0.15 ms | 0.42 ± 0.15 ms |
| 20 | nan ± nan ms | nan ± nan ms | 2.17 ± 0.22 ms |
| 24 | nan ± nan ms | nan ± nan ms | 2.21 ± 0.21 ms |
| 32 | nan ± nan ms | nan ± nan ms | 2.22 ± 0.19 ms |
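A minimal sketch of how a read-speed comparison like the one in the table could be reproduced with Pillow. The frame size (512×512), repeat count, and file names are assumptions for illustration, not the original benchmark's parameters:

```python
# Minimal re-creation of a pgm-vs-tiff read benchmark (assumed parameters).
import time

import numpy as np
from PIL import Image


def time_read(path: str, n: int = 20) -> tuple[float, float]:
    """Mean and std of the wall-clock time to open + decode an image, in ms."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        with Image.open(path) as im:
            im.load()  # force decoding; Image.open alone is lazy
        samples.append((time.perf_counter() - t0) * 1e3)
    return float(np.mean(samples)), float(np.std(samples))


# 8-bit test frame; Pillow writes mode 'L' arrays as binary PGM and as TIFF
frame = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
Image.fromarray(frame).save('frame.pgm')
Image.fromarray(frame).save('frame.tiff')

for path in ('frame.pgm', 'frame.tiff'):
    mean, std = time_read(path)
    print(f'{path}: {mean:.2f} ± {std:.2f} ms')
```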
# Conflicts: # src/instamatic/camera/camera_serval.py
@hzanoli, @ErikHogenbirkASI Could you kindly confirm that both Medipix3 and Timepix3 allow for exposures greater than 0 and lower than or equal to 10 seconds? This is what I found in both documentations.
@ErikHogenbirkASI Do you remember how you determined that?
@hzanoli This PR addresses the same issues as your fork. If I understand everything correctly, it makes the …
Nice work, looks good to me. Glad that you are willing to debug the serval integration.
Let me know when this is ready to merge.
Yes. We enforce …
Observation regarding …
Co-authored-by: Henrique Zanoli <[email protected]>
Fair enough, I don't want to get in the way by unnecessarily limiting the code. I'm happy with the solution you suggested. Let me know when the PR is ready and I will do a review.
Alright, I modified my proposal for how logging should be handled. With the current code changes:

This means that e.g. … As a showcase of how this new system is adaptable, and because even with … In my opinion the PR is ready to be merged; however, I do not mind waiting a few more days for suggestions. Ultimately, since it by default removes instamatic debug messages from the log, I guess there might be feedback against it.
Looks good, I made some suggestions to simplify the code.
Main points (also see comments):
- I'm not sure if the lock is necessary, and the `synchronized_lock` thing is somewhat complicated; it's not clear to me why it is needed.
- Try to optimize the logging statements to defer formatting: https://docs.python.org/3/howto/logging.html#optimization
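The deferred-formatting suggestion in the linked HOWTO amounts to passing the arguments to the logger instead of pre-building the string; a small sketch (the message text here is a placeholder, not the PR's actual `BAD_EXPOSURE_MSG`):

```python
import logging

logger = logging.getLogger('instamatic.camera')
BAD_EXPOSURE_MSG = 'Exposure outside supported range'  # placeholder text
exposure = 20.0

# Eager: the f-string is built even if WARNING records are filtered out.
logger.warning(f'{BAD_EXPOSURE_MSG}: {exposure}')

# Deferred: %-style args are only interpolated if the record is emitted.
logger.warning('%s: %s', BAD_EXPOSURE_MSG, exposure)
```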
```python
logger.warning(f'{self.BAD_EXPOSURE_MSG}: {exposure}')
n_triggers = math.ceil(exposure / self.MAX_EXPOSURE)
exposure1 = (exposure + self.dead_time) / n_triggers - self.dead_time
arrays = self.get_movie(n_triggers, exposure1, binsize, **kwargs)
array_sum = sum(arrays, np.zeros_like(arrays[0]))
scaling_factor = exposure / exposure1 * n_triggers  # account for dead time
return (array_sum * scaling_factor).astype(array_sum.dtype)
```
Consider moving this bit to its own function to be in line with the other 2 options:
```suggestion
logger.warning(f'{self.BAD_EXPOSURE_MSG}: {exposure}')
return _stacked_image(exposure, binsize, **kwargs)
```
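The splitting arithmetic quoted above can be sketched standalone. The limit and dead-time values are illustrative, and the scaling factor is written here with explicit parentheses as requested-time over actually-integrated-time (an assumption about the intended correction, not a copy of the PR's expression):

```python
# Standalone sketch of splitting a too-long exposure into sub-exposures.
import math

MAX_EXPOSURE = 10.0  # s, ASI upper exposure limit from the discussion
DEAD_TIME = 0.001    # s, illustrative cooldown between triggers


def split_exposure(exposure: float) -> tuple[int, float, float]:
    """Split `exposure` into n_triggers sub-exposures that fit under the limit."""
    n_triggers = math.ceil(exposure / MAX_EXPOSURE)
    exposure1 = (exposure + DEAD_TIME) / n_triggers - DEAD_TIME
    # Total integrated time is n_triggers * exposure1 = exposure - (n-1)*DEAD_TIME,
    # so scale the summed frames back up to the requested exposure:
    scaling = exposure / (n_triggers * exposure1)
    return n_triggers, exposure1, scaling


print(split_exposure(12.0))  # 2 triggers of ~6 s each, scaling just above 1
```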
Agreed... I will actually think about how to do this even better, because `get_movie` might have the same problem and I want to do this well. For example, if an image is requested with exposure 12, it should return the sum of a [6, 6] movie; but if a movie is requested with exposures [12, 12], it should collect a [6, 6, 6, 6] movie and then sum exposures pairwise. So WIP.
@stefsmeets Ok, this took some mental gymnastics, but I have re-structured this part of the code again, so now it not only looks better and has less repetition, but also properly handles movies:

- All `get_image` and `get_movie` are redirected into a new `_get_image`;
- `get_movie` properly returns `[]` if requested `n_frames=0`;
- `get_image`/`get_movie` with exposure < lower limit now return `np.zeros`/`[np.zeros]`;
- `get_image`/`get_movie` with OK exposure pass through 1 new `if` but work exactly as before;
- `get_image` with exposure > upper limit now works and returns a sum of movie arrays;
- `get_movie` with exposure > upper limit now ALSO works, and returns a list of partial sums of a more-trigger movie: if exposures [12, 12] are requested, they will come as 2-sums of [6, 6, 6, 6].
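The pairwise-summing step in the last bullet can be sketched as a small helper (a hypothetical function, not the PR's actual code):

```python
# Sketch: sub-frames collected as [6, 6, 6, 6] are summed back in groups
# of 2 to reconstruct the movie requested as [12, 12].
import numpy as np


def partial_sums(frames: list, group: int) -> list:
    """Sum every `group` consecutive sub-frames into one output frame."""
    return [
        sum(frames[i:i + group], np.zeros_like(frames[0]))
        for i in range(0, len(frames), group)
    ]


sub_frames = [np.full((2, 2), 6.0) for _ in range(4)]  # four 6-second frames
movie = partial_sums(sub_frames, group=2)              # two pairwise sums
print(len(movie), movie[0][0, 0])  # -> 2 12.0
```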
Co-authored-by: Stef Smeets <[email protected]>
Co-authored-by: Stef Smeets <[email protected]>
A lengthy answer as for why I added the … Whenever the … In contrast to … Now … From what I can see, …
src/instamatic/camera/videostream.py
Outdated
```diff
 def get_movie(self, n_frames: int, exposure=None, binsize=None):
     self.block()  # Stop the passive collection during single-frame acquisition
     self.grabber.request = MovieRequest(
         n_frames=n_frames, exposure=exposure, binsize=binsize
     )
     self.grabber.acquireInitiateEvent.set()
     self.grabber.acquireCompleteEvent.wait()
     with self.grabber.lock:
         movie = self.acquired_media
         self.grabber.request = None
         self.grabber.acquireCompleteEvent.clear()
-    self.grabber.frametime = current_frametime
-    return frame
     self.unblock()  # Resume the passive collection
     return movie
```
If you want to allow the `VideoStream` to pause the preview while collecting movies, this is the best way I found. Instead of defining `grabber.exposure` and `.binsize`, make a new `grabber.request = None` and set it to a new instance of `MediaRequest` when needed. Depending on whether it is an `ImageRequest` or a `MovieRequest`, collect the media, display the (last) frame, and return it after releasing all locks/events. With this I can remove the `CameraServal` lock. Testing using the simulator right now, but I likely won't have a chance to test it on Serval until Friday.
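The request/event handshake described here can be sketched in miniature. The class and attribute names mirror the ones quoted in this thread, but the whole thing is a simplified illustration, not the PR's implementation:

```python
# Minimal sketch of the request/event handshake: the caller posts a request
# object, wakes the grabber thread, and waits for acquisition to complete.
import threading
from dataclasses import dataclass


@dataclass
class ImageRequest:
    exposure: float


@dataclass
class MovieRequest:
    n_frames: int
    exposure: float


class Grabber:
    def __init__(self):
        self.lock = threading.Lock()
        self.request = None
        self.acquireInitiateEvent = threading.Event()
        self.acquireCompleteEvent = threading.Event()
        self.media = None

    def run_once(self):
        # Grabber thread: wait for a posted request, then "acquire" media.
        self.acquireInitiateEvent.wait()
        self.acquireInitiateEvent.clear()
        with self.lock:
            req = self.request
            if isinstance(req, MovieRequest):
                self.media = [f'frame{i}' for i in range(req.n_frames)]
            else:
                self.media = 'frame'
        self.acquireCompleteEvent.set()


grabber = Grabber()
threading.Thread(target=grabber.run_once, daemon=True).start()

# Caller side, mirroring get_movie: post the request and wait for completion.
grabber.request = MovieRequest(n_frames=3, exposure=0.05)
grabber.acquireInitiateEvent.set()
grabber.acquireCompleteEvent.wait()
with grabber.lock:
    movie = grabber.media
    grabber.request = None
    grabber.acquireCompleteEvent.clear()
print(movie)  # -> ['frame0', 'frame1', 'frame2']
```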
I managed to test it on the simulated camera and on Serval, and both work, but I want to rewrite the Serval class because, as you noticed, I made it messy and it does not handle long-exposure movies... Hopefully this will be ready tomorrow.
I am so sorry @stefsmeets, but the best way I found to address your request for a better-structured logic inside …
Nice work, and thanks for all the thought and care that went into this. I'm happy to merge this.
```python
if request is None:
    self.frame = media
elif isinstance(request, ImageRequest):
    self.requested_media = self.frame = media
    self.grabber.acquireCompleteEvent.set()
else:  # isinstance(request, MovieRequest)
    self.requested_media = media
    self.frame = media[-1]
    self.grabber.acquireCompleteEvent.set()
```
Shuffling these around to make the default (None) be more logical.
```suggestion
if isinstance(request, ImageRequest):
    self.requested_media = self.frame = media
    self.grabber.acquireCompleteEvent.set()
elif isinstance(request, MovieRequest):
    self.requested_media = media
    self.frame = media[-1]
    self.grabber.acquireCompleteEvent.set()
else:
    self.frame = media
```
They are ordered according to their probability; this ordering is the most logical to me, because in the typical case (i.e. 20 times per second) just one check is necessary rather than three. With `self.frame` being last, the rarest case takes the longest, though the difference is on the order of nanoseconds.
Co-authored-by: Stef Smeets <[email protected]>
The context

This week I worked on adapting Instamatic to our ASI Medipix3 detector via the `CameraServal` class. Despite a variety of issues, many of them boiled down to the same problems. Actions listed below led to the GUI crashing:

AD 1 & 2, I learned to my surprise that both ASI Medipix3 and Timepix3 impose hard limits on exposure: >0, ≤10 seconds. I have therefore added a validate method to `CameraServal` that trims any exposure passed to it into a [0.001, 10] range and logs a warning. If anything requests a 20-second frame, it will receive a 10-second frame after approx. 10.2 seconds. It is neither pretty nor general, but it gets the job done and fixes 99% of the issues.

EDIT: After reviewing and consulting this issue, instead of trimming the values to the limits, I have implemented a way to collect <0 and >10 second exposures. The former returns empty images; the latter sums shorter frames collected via a re-implemented `get_movie`. I also synchronized `CameraServal.get_image` and `get_movie` and removed an issue with the GUI freezing at exit. For more details, see this comment.

AD 3, in #108 I addressed the 0-second preview crashing the GUI. Since then I learned that even if exposure > 0, the GUI itself sometimes requests 0-second images. This happens because `VideoStream.get_image` starts by setting `self.grabber.frametime = 0` to "prevent it lagging data acquisition" and does some stuff, but before `self.grabber.acquireInitiateEvent.set()`, the `ImageGrabber.run` in a 2nd thread races to `self.cam.get_image` and often manages to call it, with `frametime=0`. I found that replacing the `self.grabber.frametime` set/restore with `self.block()`/`unblock()` achieves the same effect while never requesting a dreaded 0-second image.

AD 4, as for why the default config fails: the "cooldown" between exposures is hard-coded to 0.5 ms. In fact, it should be higher if using the Timepix3 asic or `"PixelDepth": 24`. On his fork, @hzanoli suggests using different hard-coded values based on a new config parameter `asic`, but I believe a more elegant solution is to derive it from the difference between the initial `ExposureTime` and `TriggerPeriod`.

Furthermore, I noticed that for `"PixelDepth": 24` the preview was much smoother than for `"PixelDepth": 12`. After some tests, I realized that in 12-bit mode data was passed as `pgm`, and in 24-bit as `tiff` (also enforced for the Timepix3 asic by @hzanoli). I tested `pgm` and `tiff` and found that reading `tiff` with Pillow was significantly faster. So here I ask: why even use `pgm` in the first place (benchmark code)?

When debugging, I was trying to use the instamatic log, but it was extremely inconvenient because it currently captures every debug message from every package. Irrelevant debug messages, mostly from PIL, are generated at an absurd rate: ~2000 when running instamatic for 12 seconds, ~100 MB / hour. So here I suggest modifying the GUI file handler so that `logger.debug` messages are logged only if at least `-v` is set; … if `-vv` is set; … if `-vvv` is set.

Finally, during my tests I noticed that some of the links were outdated or wrong. I was also disappointed that I could not copy-and-paste the Instamatic citation. So I changed that: it looks slightly different, but CTRL+C works!
Major changes

- `CameraServal`: Derive the length of the cooldown period between exposures from the config (trigger − exposure) rather than using a hard-coded value;
- `CameraServal`: Hard-limit exposure times between 0.001 and 10 seconds;
- `CameraServal`: If the requested exposure is equal to or lower than 0, return an empty array;
- `CameraServal`: If the requested exposure is >10, return a sum of images from the re-implemented `get_movie`;
- `instamatic/main.py` (GUI): Limit log size from ~100 MB/h to KB/h by not logging debug messages unless a new flag `-v` (for instamatic) or `-vv` (for imported libraries) is specified.

Minor changes
- `VideoStream.get_image`: Use `block()` instead of `frametime=0` to temporarily pause the preview;
- `CameraServal`: Send images in `tiff` rather than the significantly-slower-to-decode `pgm`;
- `CameraServal`: Re-implement `get_movie` and synchronize it with `get_image` to prevent errors;
- `serval.yaml`: Add comments, update values to work for ASI Timepix3 (credit: @hzanoli);
- `AboutFrame`: Fix links and make author & reference fields select-and-copy-able.

Bugfixes
- `cred/experiment.py`: Allow using the simulated FEI camera (credit: @hzanoli);
- `camera_serval.py`: Remove unused import statements;
- `AboutFrame`: Fix `__main__` so that it can be shown stand-alone if ever desired;
- `THANKS.md`: Fix the link so that it points to the correct contributors list;
- `gui.py`: On `close()`, redirect `sys.stdout` and `sys.stderr` back so that it does not freeze the GUI;
- `tem_server.py`: Make imports global to facilitate running via `main`/`instamatic.temserver`.
Effect on the codebase

Using ASI detectors in instamatic should now work and shouldn't randomly crash at every chance. I analyzed the code and successfully ran some RED (!), and I think replacing `frametime=0` with `block()` should be fine, but admittedly I do not know whether it affects something I did not touch. Likewise with `pgm`: I don't understand why it was used in the first place; it is not faster. I do not believe anyone will miss debug logs from external libraries, but importantly, `-v` is now needed to see instamatic debug messages as well.
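The config-derived cooldown mentioned among the major changes (trigger − exposure instead of a hard-coded 0.5 ms) can be sketched as follows. The config key names `ExposureTime` and `TriggerPeriod` come from the discussion above; the numbers and the function itself are illustrative assumptions:

```python
# Sketch: derive the inter-exposure dead time from the detector settings
# instead of hard-coding 0.5 ms.
def dead_time_from_config(detector_config: dict) -> float:
    """Dead time between exposures = TriggerPeriod - ExposureTime (seconds)."""
    return detector_config['TriggerPeriod'] - detector_config['ExposureTime']


cfg = {'ExposureTime': 1.0, 'TriggerPeriod': 1.0005}  # illustrative values
print(f'{dead_time_from_config(cfg) * 1e3:.2f} ms')  # -> 0.50 ms
```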