
Improve ASI detector support and reduce excessive logging #111

Merged
36 commits merged into instamatic-dev:main on Mar 10, 2025

Conversation

@Baharis (Contributor) commented Feb 20, 2025

The context

This week I worked on adapting Instamatic to our ASI Medipix3 detector via the CameraServal class. Despite a variety of issues, many of them boiled down to the same few problems. The actions listed below led to the GUI crashing:

  1. Setting exposure > 10 seconds;
  2. Setting exposure ≤ 0 seconds;
  3. Collecting any images other than the preview;
  4. Starting the GUI with default config.

AD 1 & 2, I learned to my surprise that both ASI Medipix3 and Timepix3 impose hard limits on exposure: >0 and ≤10 seconds. I have therefore added a validate method to CameraServal that trims any exposure passed to it into the [0.001, 10] range and logs a warning. If anything requests a 20-second frame, it will receive a 10-second frame after approx. 10.2 seconds. It is neither pretty nor general, but it gets the job done and fixes 99% of the issues.
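
The trimming described here amounts to a clamp plus a warning. A minimal sketch, with hypothetical names (the actual validate method in CameraServal may differ):

```python
# Hypothetical sketch of the exposure-trimming logic; the real
# CameraServal.validate method may differ in names and details.
import logging

logger = logging.getLogger(__name__)

MIN_EXPOSURE = 0.001  # seconds; Serval rejects exposures <= 0
MAX_EXPOSURE = 10.0   # seconds; Serval rejects exposures > 10 s

def validate_exposure(exposure: float) -> float:
    """Trim a requested exposure into the [0.001, 10] s range and warn."""
    trimmed = min(max(exposure, MIN_EXPOSURE), MAX_EXPOSURE)
    if trimmed != exposure:
        logger.warning('Exposure %r s out of range, trimmed to %r s',
                       exposure, trimmed)
    return trimmed
```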

EDIT: After reviewing and consulting this issue, instead of trimming the values to the limits, I have implemented a way to collect ≤0 and >10 second exposures. The former returns empty images; the latter sums shorter frames collected via a re-implemented get_movie. I also synchronized CameraServal.get_image and get_movie and removed an issue with the GUI freezing at exit. For more details, see this comment.

AD 3, in #108 I addressed the 0-second preview crashing the GUI. Since then I learned that even if exposure > 0, the GUI itself sometimes requests 0-second images. This happens because VideoStream.get_image starts by setting self.grabber.frametime = 0 to "prevent it lagging data acquisition" and performs some setup, but before self.grabber.acquireInitiateEvent.set(), ImageGrabber.run in a second thread races to self.cam.get_image and often manages to call it – with frametime=0. I found that replacing the self.grabber.frametime set/restore with self.block()/unblock() achieves the same effect while never requesting a dreaded 0-second image.
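
The race fix can be illustrated with a minimal Event-based pause in the spirit of block()/unblock(); the class and attribute names below are illustrative, not VideoStream's actual API:

```python
# Minimal sketch of the race fix described above: the grabber thread
# pauses on an Event while a dedicated frame is acquired, so it never
# sees a 0-second frametime. Names are illustrative, not VideoStream's
# actual API.
import threading

class Grabber:
    def __init__(self):
        self.continuous = threading.Event()
        self.continuous.set()   # passive preview collection runs by default
        self.stop = threading.Event()

    def run(self, get_image):
        # Continuously collect preview frames, except while blocked.
        while not self.stop.is_set():
            self.continuous.wait()      # pause here while block()-ed
            if self.continuous.is_set():
                get_image()             # frametime is never touched

    def block(self):
        """Pause passive collection (instead of setting frametime = 0)."""
        self.continuous.clear()

    def unblock(self):
        """Resume passive collection."""
        self.continuous.set()
```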

AD 4, as for why the default config fails: the "cooldown" between exposures is hard-coded to 0.5 ms. In fact, it should be higher when using the Timepix3 asic or "PixelDepth": 24. On his fork, @hzanoli suggests using different hard-coded values based on a new config parameter asic, but I believe a more elegant solution is to derive it from the difference between the initial ExposureTime and TriggerPeriod.
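
A sketch of that derivation, assuming the config carries Serval-style ExposureTime and TriggerPeriod keys (the helper name is hypothetical, not instamatic API):

```python
# Sketch: derive the cooldown (dead time) between exposures from the
# initial detector config rather than hard-coding 0.5 ms. Key names
# follow Serval's detector-config JSON; dead_time_from_config is an
# illustrative helper.
def dead_time_from_config(detector_config: dict) -> float:
    """Dead time [s] = TriggerPeriod - ExposureTime of the initial config."""
    return detector_config['TriggerPeriod'] - detector_config['ExposureTime']

# e.g. a config where each trigger period leaves 1.5 ms of cooldown:
cfg = {'ExposureTime': 1.0, 'TriggerPeriod': 1.0015}
```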

Furthermore, I noticed that for "PixelDepth": 24 the preview was much smoother than for "PixelDepth": 12. After some tests I realized that in 12-bit mode data was passed as pgm, and in 24-bit mode as tiff (also enforced for the Timepix3 asic by @hzanoli). I tested pgm and tiff and found that reading tiff with Pillow was significantly faster. So here I ask: why even use pgm in the first place (benchmark code)?

file_format                pgm             png            tiff
bit_depth                                                    
1             117.41 ± 6.32 ms   nan ±  nan ms  0.17 ± 0.03 ms
4            117.24 ± 11.10 ms   nan ±  nan ms  0.17 ± 0.04 ms
6            151.23 ± 48.35 ms   nan ±  nan ms  0.16 ± 0.01 ms
8              0.08 ±  0.22 ms  1.41 ± 0.09 ms  0.20 ± 0.07 ms
10           153.95 ± 13.69 ms   nan ±  nan ms  0.63 ± 0.10 ms
12           153.15 ± 23.07 ms   nan ±  nan ms  0.46 ± 0.17 ms
14           150.50 ± 17.25 ms   nan ±  nan ms  0.68 ± 0.11 ms
16             1.84 ±  0.20 ms  2.10 ± 0.15 ms  0.42 ± 0.15 ms
20              nan ±   nan ms   nan ±  nan ms  2.17 ± 0.22 ms
24              nan ±   nan ms   nan ±  nan ms  2.21 ± 0.21 ms
32              nan ±   nan ms   nan ±  nan ms  2.22 ± 0.19 ms

When debugging, I was trying to use the instamatic log, but it was extremely inconvenient because it currently captures every debug message from every package. Irrelevant debug messages, mostly from PIL, are generated at an absurd rate: ~2000 in 12 seconds of running instamatic, or ~100 MB / hour. So here I suggest modifying the GUI file handler so that:

  • Instamatic logger.debug messages are logged only if at least -v is set;
  • Imported library debug statements are logged only if at least -vv is set;
  • Log messages include whole file path instead of just module name if -vvv is set.
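
The three-level scheme above can be sketched with standard logging levels; setup_logging and the exact flag plumbing are my assumptions, not the code in instamatic/main.py:

```python
# Sketch of the proposed verbosity scheme (hypothetical helper, not the
# actual instamatic/main.py code): -v enables instamatic debug messages,
# -vv enables library debug messages, -vvv switches to full file paths.
import logging

def setup_logging(verbosity: int) -> logging.Handler:
    root = logging.getLogger()              # imported libraries end up here
    app = logging.getLogger('instamatic')   # instamatic's own logger
    root.setLevel(logging.DEBUG if verbosity >= 2 else logging.INFO)
    app.setLevel(logging.DEBUG if verbosity >= 1 else logging.INFO)
    # -vvv: log the whole file path instead of just the module name
    location = '%(pathname)s' if verbosity >= 3 else '%(module)s'
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter(f'{location}:%(lineno)d | %(levelname)s | %(message)s')
    )
    root.addHandler(handler)
    return handler
```

Because a child logger's own level gates its records before they propagate, instamatic debug messages can pass at -v while library debug messages stay filtered until -vv.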

Finally, during my tests I noticed that some of the links were outdated or wrong. I was also disappointed I could not copy-and-paste Instamatic citation. So I changed that: looks slightly different but CTRL+C works!

[image: screenshot of the updated About frame]

Major changes

  • CameraServal: Derive the length of cooldown period between exposures from config (trigger - exposure) rather than using a hard-coded value;
  • CameraServal: Hard-limit exposure times between 0.001 and 10 seconds;
  • EDIT: CameraServal: If the requested exposure is equal to or lower than 0, return an empty array.
  • EDIT: CameraServal: If the requested exposure is >10, return the sum of images from a re-implemented get_movie.
  • instamatic/main.py (GUI): Limit log size from ~100 MB/h to KB/h by not logging debug messages unless a new flag -v (for instamatic) or -vv (for imported libraries) is specified.

Minor changes

  • VideoStream.get_image: Use block() instead of frametime=0 to temporarily pause the preview.
  • CameraServal: Send images in tiff rather than significantly-slower-to-decode pgm.
  • EDIT: CameraServal: Re-implement get_movie, synchronize it with get_image to prevent errors.
  • serval.yaml: Add comments, update values to work for ASI Timepix3 (credit: @hzanoli);
  • AboutFrame: Fix links and make author & reference fields select-and-copy-able.

Bugfixes

  • cred/experiment.py: Allow using simulated FEI camera (credit: @hzanoli);
  • camera_serval.py: Remove unused import statements;
  • AboutFrame: Fix __main__ so that it can be shown stand-alone if ever desired;
  • THANKS.md: Fix the link so that it points to the correct contributors list;
  • EDIT: gui.py: on close(), redirect sys.stdout and sys.stderr back so that it does not freeze GUI;
  • EDIT: tem_server.py: make imports global to facilitate running via main / instamatic.temserver.

Effect on the codebase

Using ASI detectors in instamatic should now work and shouldn't randomly crash at every chance. I analyzed the code and successfully ran some RED (!), and I think replacing frametime=0 with block() should be fine, but arguably I do not know whether it affects something I did not touch. Likewise with pgm: I don't understand why it was used in the first place; it is not faster. I do not believe anyone will miss debug logs from external libraries, but importantly, -v is now needed to see instamatic debug messages as well.

hzanoli and others added 12 commits February 17, 2025 14:27
@Baharis Baharis changed the title Serval guards Improve ASI detector support, recude excessive logging, fix links Feb 20, 2025
@Baharis Baharis changed the title Improve ASI detector support, recude excessive logging, fix links Improve ASI detector support and recude excessive logging Feb 20, 2025
@Baharis (Contributor, Author) commented Feb 20, 2025

@hzanoli , @ErikHogenbirkASI Could you kindly confirm that both Medipix3 and Timepix3 allow only exposures greater than 0 and lower than or equal to 10 seconds? This is what I found in both sets of documentation.

@Baharis (Contributor, Author) commented Feb 20, 2025

@ErikHogenbirkASI Do you remember how you determined that pgm is faster than tiff? My PIL benchmark suggests it is much slower, especially if the bit depth is not 2**N, as in this case.

(benchmark table identical to the one in the PR description above)

@Baharis (Contributor, Author) commented Feb 20, 2025

@hzanoli This PR addresses the same issues as your fork. If I understand everything correctly, it makes the asic config addition unnecessary since tiff becomes the new default file format and exposure cooldown is dynamically determined from existing fields. I believe your Timepix3 clients should now be able to use instamatic on the new main branch, whether they applied the config changes you suggested or not.

@stefsmeets stefsmeets self-requested a review February 24, 2025 12:43
@stefsmeets (Member) left a comment


Nice work, looks good to me. Glad that you are willing to debug the serval integration.

Let me know when this is ready to merge.

@hzanoli (Contributor) commented Feb 25, 2025

> @hzanoli , @ErikHogenbirkASI Could you kindly confirm that both Medipix3 and Timepix3 allow for exposures greater than 0 and lower/equal than 10? This is what I found in both documentations.

Yes. We enforce 0 < Exposure Time <=10s for both Medipix3 and Timepix3 on Serval as described in the manual.

@Baharis (Contributor, Author) commented Feb 25, 2025

Observation regarding pgm versus tiff

@ErikHogenbirkASI, @emx77 Concerning the pgm versus tiff performance, I repeated my benchmark on the Windows 10 machine that runs Instamatic in the lab and compared the results with my aforementioned tests on Windows 11. For good measure, I also tested directly on images received by the Serval toolkit from the Serval server. Exact numbers below.

From what I gathered, Pillow used to handle pgm files much faster, but since PR #6119, images whose max value is not 2**8 or 2**16 are scaled up to these respective values instead. Here is a script you can run to confirm this. This is very undesirable when the absolute values of individual pixels actually matter, far more than the longer processing time.

What I guess happened, from a development POV, is that in the original code @ErikHogenbirkASI favored pgm because the files were smaller (faster transfer) and their processing time was comparable to or better than tiff's. However, with the changes requested over the last 3 years, the benefit of smaller pgm files may have become overshadowed by changes in how Pillow handles them. This inadvertently led to me advocating tiff over pgm.

I now see the drawback of using tiff: in 12-bit mode, pgm files are indeed 4x smaller than their tiff counterparts. Indeed, assuming 1 Gb/s, the transfer of a 16- or 32-bit tiff file should take 4 or 8 ms, and it is worth trying to shorten that to 1 or 2 ms. However, IMO this cannot be done at the cost of changing the scale or spending half a second on processing.
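
The quoted transfer times check out as a back-of-envelope calculation, assuming a hypothetical 512×512 frame and an ideal 1 Gb/s link with no protocol overhead:

```python
# Back-of-envelope check of the transfer times above, assuming a
# hypothetical 512x512 frame sent over an ideal 1 Gb/s link.
LINK_BPS = 1e9      # 1 Gb/s
PIXELS = 512 * 512

def transfer_ms(bits_per_pixel: int) -> float:
    """Raw transfer time in ms for one uncompressed frame."""
    return PIXELS * bits_per_pixel / LINK_BPS * 1e3

for bits in (16, 32):
    print(f'{bits}-bit frame: ~{transfer_ms(bits):.1f} ms')  # ~4.2 and ~8.4 ms
```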

Benchmark results

Windows 11, Python 3.13, PIL 11.1.0, AMD Ryzen 7 PRO 8840HS, 3.3Ghz, 64 GB RAM (1000 samples):

file_format                pgm             png            tiff
bit_depth                                                    
1             117.41 ± 6.32 ms   nan ±  nan ms  0.17 ± 0.03 ms
4            117.24 ± 11.10 ms   nan ±  nan ms  0.17 ± 0.04 ms
6            151.23 ± 48.35 ms   nan ±  nan ms  0.16 ± 0.01 ms
8              0.08 ±  0.22 ms  1.41 ± 0.09 ms  0.20 ± 0.07 ms
10           153.95 ± 13.69 ms   nan ±  nan ms  0.63 ± 0.10 ms
12           153.15 ± 23.07 ms   nan ±  nan ms  0.46 ± 0.17 ms
14           150.50 ± 17.25 ms   nan ±  nan ms  0.68 ± 0.11 ms
16             1.84 ±  0.20 ms  2.10 ± 0.15 ms  0.42 ± 0.15 ms
20              nan ±   nan ms   nan ±  nan ms  2.17 ± 0.22 ms
24              nan ±   nan ms   nan ±  nan ms  2.21 ± 0.21 ms
32              nan ±   nan ms   nan ±  nan ms  2.22 ± 0.19 ms

Windows 10, Python 3.12, PIL ?, Intel Core i7-2600, 3.4GHz, 16 GB RAM (100 samples):

file_format                pgm             png            tiff
bit_depth                                 
8               0.42 ± 2.08 ms  2.71 ± 0.07 ms  0.44 ± 0.04 ms
10           423.82 ± 12.52 ms    nan ± nan ms  1.18 ± 0.07 ms
12            422.33 ± 9.23 ms    nan ± nan ms  1.14 ± 0.04 ms
14            414.75 ± 9.90 ms    nan ± nan ms  1.12 ± 0.12 ms
16              2.75 ± 0.10 ms  4.32 ± 0.16 ms  0.89 ± 0.24 ms
20                nan ± nan ms    nan ± nan ms  3.30 ± 0.14 ms
24                nan ± nan ms    nan ± nan ms  3.31 ± 0.14 ms
32                nan ± nan ms    nan ± nan ms  3.30 ± 0.13 ms

Windows 10, same setup but processing actual experimental files received by serval toolkit (100 samples):

file_format                pgm            tiff
bit_depth                                 
12           430.80 ± 15.55 ms  2.40 ± 4.83 ms
24                nan ± nan ms  4.45 ± 0.25 ms

@Baharis Baharis marked this pull request as draft February 26, 2025 12:00
@stefsmeets (Member) commented Mar 3, 2025

> Regarding the last point of controversy revolving around the exposure limits enforced by Serval, I read your suggestions @stefsmeets but I politely disagree. Allowing CameraServal to raise exceptions there or wrapping every cam.get_image in try/except is riddled with issues. get_image methods are called in 77 places, some in the GUI and some outside. In places, the exposure can also be calculated dynamically, meaning that even if we ourselves take care not to exceed the limit, Instamatic might decide otherwise.

Fair enough, I don't want to get in the way by unnecessarily limiting the code. I'm happy with the solution you suggested. Let me know when the PR is ready and I will do a review.

@Baharis Baharis self-assigned this Mar 4, 2025
@Baharis (Contributor, Author) commented Mar 4, 2025

Alright, I modified my proposal for how logging should be handled. With the current code changes:

  • Instamatic logger.debug messages are logged only if at least -v is set;
  • Imported library debug statements are logged only if at least -vv is set;
  • Log messages include whole file path instead of just module name if -vvv is set.

This means that e.g. logger.debug(response) when receiving any Merlin command response, or self.logger.debug(f'Image variance: {imgvar}') at each step of AutoCRED crystal tracking, will not be written by the GUI unless it is run with -v, and debug logs from other libraries, like __init__:47 | DEBUG | Creating converter from 7 to 5, won't be written unless it is run with -vv.

As a showcase of how adaptable this new system is, and because even with -vv I had no idea what the external loggers were trying to tell me, by adding a single line of code I also made it so that with -vvv the message contains the full path of the logger instead of the module only. This lets one learn that the particular message raised from __init__ is, in fact, produced by h5py: C:\Path\To\My\instamatic\venv\Lib\site-packages\h5py\__init__.py:47 | DEBUG | Creating converter from 7 to 5.

In my opinion the PR is ready to be merged; however, I do not mind waiting a few more days for suggestions. Ultimately, since it removes instamatic debug messages from the log by default, I guess there might be feedback against it.

@Baharis Baharis requested a review from stefsmeets March 4, 2025 15:39
@stefsmeets (Member) left a comment


Looks good, I made some suggestions to simplify the code.

Main points (also see comments):

  1. I'm not sure if the lock is necessary, and the synchronized_lock thing is somewhat complicated and it's not clear to me why it is necessary.
  2. Try to optimize the logging statements to defer formatting: https://docs.python.org/3/howto/logging.html#optimization

Comment on lines 76 to 82
logger.warning(f'{self.BAD_EXPOSURE_MSG}: {exposure}')
n_triggers = math.ceil(exposure / self.MAX_EXPOSURE)
exposure1 = (exposure + self.dead_time) / n_triggers - self.dead_time
arrays = self.get_movie(n_triggers, exposure1, binsize, **kwargs)
array_sum = sum(arrays, np.zeros_like(arrays[0]))
scaling_factor = exposure / exposure1 * n_triggers # account for dead time
return (array_sum * scaling_factor).astype(array_sum.dtype)
@stefsmeets (Member):

Consider moving this bit to its own function to be in line with the other 2 options:

Suggested change
-logger.warning(f'{self.BAD_EXPOSURE_MSG}: {exposure}')
-n_triggers = math.ceil(exposure / self.MAX_EXPOSURE)
-exposure1 = (exposure + self.dead_time) / n_triggers - self.dead_time
-arrays = self.get_movie(n_triggers, exposure1, binsize, **kwargs)
-array_sum = sum(arrays, np.zeros_like(arrays[0]))
-scaling_factor = exposure / exposure1 * n_triggers  # account for dead time
-return (array_sum * scaling_factor).astype(array_sum.dtype)
+logger.warning(f'{self.BAD_EXPOSURE_MSG}: {exposure}')
+return _stacked_image(exposure, binsize, **kwargs)

@Baharis (Contributor, Author):

Agreed... I will actually think about how to do this even better, because get_movie might have the same problem and I want to do this well. For example, if an image with exposure 12 is requested, it should return the sum of a [6, 6] movie, but if a movie with exposures [12, 12] is requested, it should collect a [6, 6, 6, 6] movie and then sum the exposures pairwise. So WIP.
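
The pairwise-sum idea can be sketched like so; split_and_regroup and MAX_EXPOSURE are illustrative names, and collect stands in for the actual acquisition call:

```python
# Sketch of splitting over-limit exposures into sub-frames and summing
# them back per requested frame. Names are illustrative, not the final
# CameraServal API.
import math

import numpy as np

MAX_EXPOSURE = 10.0  # seconds, Serval's upper exposure limit

def split_and_regroup(n_frames, exposure, collect):
    """collect(n, exp) must return n arrays exposed for exp seconds each."""
    k = math.ceil(exposure / MAX_EXPOSURE)     # sub-frames per frame
    sub = collect(n_frames * k, exposure / k)  # e.g. [12, 12] -> 4 x 6 s
    return [sum(sub[i * k:(i + 1) * k]) for i in range(n_frames)]

# Fake acquisition: each "frame" is uniform with value == its exposure
fake_collect = lambda n, exp: [np.full((2, 2), exp) for _ in range(n)]
movie = split_and_regroup(2, 12.0, fake_collect)  # two 12 s frames
```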

@Baharis (Contributor, Author) commented Mar 6, 2025

@stefsmeets Ok, this took some mental gymnastics, but I have re-structured this part of the code again, so now it not only looks better and has less repetition but also properly handles movies:

  • All get_image and get_movie calls are redirected into a new _get_image;
  • get_movie properly returns [] if n_frames=0 is requested;
  • get_image / get_movie with exposure < lower limit now return np.zeros / [np.zeros];
  • get_image / get_movie with an OK exposure pass through 1 new if but work exactly as before;
  • get_image with exposure > upper limit now works and returns a sum of movie arrays;
  • get_movie with exposure > upper limit now ALSO works, returning a list of partial sums of a movie with more triggers: if exposures [12, 12] are requested, they come as 2-sums of [6, 6, 6, 6].

@Baharis (Contributor, Author) commented Mar 5, 2025

A lengthy answer as to why I added the CameraServal.lock and @synchronized decorator:

Whenever the Camera constructor is called, i.e. almost always when using instamatic, the object returned is either a subclass of CameraBase or, if stream=True and CameraSubclass.streamable=True, a VideoStream. In particular, I am using an ASI camera that is streamable, so by default whenever I run instamatic or instamatic.controller, my ctrl.cam is a VideoStream.

In contrast to a CameraSubclass, VideoStream instances stream continuously, whether anything is observing the stream or not. At __init__, VideoStream calls self.start(), which calls self.grabber.start_loop(), which starts a Thread(target=self.run, ...), where run continuously calls self.cam.get_image. To reiterate, this happens whenever a VideoStream is created, meaning that getting ctrl with instamatic:initialize is enough to start a daemon-thread background stream.

Now VideoStream defines a neat __getattr__ interface that passes any unrecognized attribute requests to self.cam. It also defines its own smart get_image method (modified in this PR) that blocks the passive image collection while a dedicated image is collected. However, it does NOT do the same for get_movie: get_movie is undefined and delegated directly to the camera. This means that if you ever call get_movie, the passive data collection will NOT stop, and your camera will start concurrently receiving instructions from two sources: background get_image calls and the user-requested get_movie. This leads to the mess described in this comment.

From what I can see, get_movie is defined in multiple places, but rarely, if ever, used. I encountered this issue because I was testing using instamatic.controller rather than the instamatic GUI, which does not readily allow collecting movies. Since I did not fully understand the streaming mechanism and considered that other cameras might be better protected against this, I decided to prevent CameraServal from ever receiving instructions from two different sources by adding this lock. However, after an additional 2 weeks of experience and after constructing this answer, I have come to the conclusion that it might be more reasonable to instead implement VideoStream.get_movie and add the same streaming lock there. This would prevent the issue for all cameras, not only ASI ones.

Comment on lines 162 to 174
def get_movie(self, n_frames: int, exposure=None, binsize=None):
    self.block()  # Stop the passive collection during movie acquisition
    self.grabber.request = MovieRequest(
        n_frames=n_frames, exposure=exposure, binsize=binsize
    )
    self.grabber.acquireInitiateEvent.set()
    self.grabber.acquireCompleteEvent.wait()
    with self.grabber.lock:
        movie = self.acquired_media
        self.grabber.request = None
        self.grabber.acquireCompleteEvent.clear()
    self.unblock()  # Resume the passive collection
    return movie
@Baharis (Contributor, Author) commented Mar 5, 2025

If you want to allow the VideoStream to pause the preview while collecting movies, this is the best way I found. Instead of defining grabber.exposure and .binsize, make a new grabber.request = None and set it to a new instance of MediaRequest when needed. Depending on whether it is an ImageRequest or a MovieRequest, collect the media, display the (last) frame, and return it after releasing all locks/events. With this I can remove the CameraServal lock. I am testing using the simulator right now, but I likely won't have a chance to test it on Serval until Friday.

@Baharis (Contributor, Author):

I managed to test it on the simulated camera and on Serval and both work, but I want to rewrite the Serval class because, as you noticed, I made it messy and it does not handle long-exposure movies... Hopefully this will be ready tomorrow.

@Baharis (Contributor, Author) commented Mar 6, 2025

I am so sorry @stefsmeets , but the best way I found to address your request for better-structured logic inside CameraServal required another rewrite... Nevertheless, with the last change I managed to completely fix CameraServal.get_movie, added an option to collect movies with exposures over 10 s, removed code repetition, and isolated the scaling logic into a new, better-named CameraServal.spliced_sum.

@Baharis Baharis requested a review from stefsmeets March 10, 2025 08:50
@stefsmeets (Member) left a comment

Nice work, and thanks for all the thought and care that went into this. I'm happy to merge this.

Comment on lines +140 to +148
if request is None:
    self.frame = media
elif isinstance(request, ImageRequest):
    self.requested_media = self.frame = media
    self.grabber.acquireCompleteEvent.set()
else:  # isinstance(request, MovieRequest):
    self.requested_media = media
    self.frame = media[-1]
    self.grabber.acquireCompleteEvent.set()
@stefsmeets (Member):

Shuffling these around to make the default (None) case read more logically.

Suggested change
-if request is None:
-    self.frame = media
-elif isinstance(request, ImageRequest):
-    self.requested_media = self.frame = media
-    self.grabber.acquireCompleteEvent.set()
-else:  # isinstance(request, MovieRequest):
-    self.requested_media = media
-    self.frame = media[-1]
-    self.grabber.acquireCompleteEvent.set()
+if isinstance(request, ImageRequest):
+    self.requested_media = self.frame = media
+    self.grabber.acquireCompleteEvent.set()
+elif isinstance(request, MovieRequest):
+    self.requested_media = media
+    self.frame = media[-1]
+    self.grabber.acquireCompleteEvent.set()
+else:
+    self.frame = media

@Baharis (Contributor, Author) commented Mar 10, 2025

They are ordered according to their probability; this ordering is the most logical to me because typically (i.e. 20 times per second) just one check is necessary rather than three. With self.frame last, that branch takes the longest, albeit the difference is somewhere in the ns range.

@Baharis Baharis merged commit 73b3039 into instamatic-dev:main Mar 10, 2025
7 checks passed
@Baharis Baharis deleted the serval_guards branch March 25, 2025 18:05