Data buffering issue #308
Conversation
… that occurs with the latest LSL versions
Tested this in the lab and, with a 0.95 AUC, I was able to type with both Matrix and RSVP. This seems to fix the issue! Below are the logs from the session.
Matrix Calibration Logs
Matrix Copy Phrase Logs
There are some duplicate log messages, and some of the added logging could use different levels. Thanks for the quick fix!
@@ -74,7 +95,7 @@ def start_acquisition(self) -> bool:
        self.inlet = StreamInlet(
            stream_info,
            max_buflen=self.max_buffer_len,
Let's set this as our max buffer length.
It's a little confusing because there are two buffers to consider here. One is the buffer that LSL uses for the StreamInlet. The other is the RingBuffer that the lsl_client uses internally. The self.max_buffer_len is the size used by the RingBuffer. These two buffers used to use the same max_buffer_len without issue, but for some reason that is no longer the case, and we need to increase the value for the StreamInlet to have the data we're looking for.
The reason we need to maintain the secondary RingBuffer is so we can do subsequent queries for the same data. Queries to the StreamInlet will drain that buffer.
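The two-buffer relationship described above can be sketched in pure Python. This is a conceptual illustration, not BciPy's actual classes: MockInlet stands in for pylsl's StreamInlet (whose pulls consume data), and RingBuffer is a simplified deque-backed buffer whose reads do not consume.

```python
from collections import deque


class MockInlet:
    """Stand-in for pylsl.StreamInlet: pulling samples removes (drains) them."""

    def __init__(self, samples):
        self._queue = deque(samples)

    def pull_chunk(self, max_samples=1024):
        count = min(max_samples, len(self._queue))
        return [self._queue.popleft() for _ in range(count)]


class RingBuffer:
    """Fixed-size buffer keeping the most recent items; reads don't consume."""

    def __init__(self, size):
        self._buf = deque(maxlen=size)

    def extend(self, items):
        self._buf.extend(items)

    def get(self):
        return list(self._buf)


inlet = MockInlet(range(10))
ring = RingBuffer(size=8)

ring.extend(inlet.pull_chunk())  # drain the inlet into the ring buffer
first = ring.get()               # query the buffered data
second = ring.get()              # the same data is still available
```

Here `first` and `second` are identical, while a second `pull_chunk` on the inlet would come back empty — which is why the client keeps its own RingBuffer on top of the inlet.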
"""Pull a chunk of samples from LSL and record in the buffer. | ||
Returns the count of samples pulled. | ||
""" | ||
log.info(f"Pulling from LSL (max_samples: {self.max_samples})") |
Some of these could be debug level
data_slice = [
    record for record in data if start <= record.timestamp <= end
][0:limit]
log.info(f'{len(data_slice)} records returned')
log.info(f"Filtered records: {self._data_stats(data_slice)}")
return data_slice
These data stats seem to be the most helpful / informative!
bcipy/helpers/acquisition.py
Outdated
@@ -165,7 +165,7 @@ def init_lsl_client(parameters: dict,
    """Initialize a client that acquires data from LabStreamingLayer."""

    data_buffer_seconds = round(max_inquiry_duration(parameters))

    log.info(f"Setting an acquisition buffer of {data_buffer_seconds} seconds")
It might be helpful to know which device this applies to. Right now this logs twice with the same value.
Good suggestion
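One way to act on the suggestion, sketched below. The `device_name` parameter is a hypothetical stand-in for however the caller identifies the device (e.g. an attribute on a device spec); the message format otherwise follows the log line in the diff above.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)


def log_buffer_setting(device_name: str, data_buffer_seconds: int) -> str:
    """Log the acquisition buffer size, tagged with the device it applies to.

    device_name is a hypothetical parameter used here for illustration.
    Returns the message so callers (and tests) can inspect it."""
    msg = (f"Setting an acquisition buffer of {data_buffer_seconds} "
           f"seconds for device: {device_name}")
    log.info(msg)
    return msg
```

Including the device name also disambiguates the duplicate log lines, since each device's message is now distinct.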
bcipy/acquisition/record.py
Outdated
of channel information (float)."""
data: List[Any]
timestamp: float
rownum: int = None
This should be typed Optional[int] = None, since the default is None.
    'mean_diff': round(diffs.mean(), precision),
    'max_diff': round(diffs.max(), precision)
}
return {}
Would it make sense to return None? Then a caller could check for that condition instead of failing on a KeyError.
My primary use case for the dict is to print out the whole thing. If it might be a None value I would need some additional checks. I think in this case, if we're not sure that a key will be in the map, it would be cleaner to use map.get(key, default_value).
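The trade-off discussed above can be shown with a simplified stand-in for the stats helper (not the actual _data_stats implementation): returning an empty dict lets callers use .get with a default instead of None checks.

```python
def data_stats(diffs, precision=3):
    """Simplified stand-in for the stats helper discussed above:
    returns summary stats, or an empty dict when there is no data."""
    if not diffs:
        return {}
    return {
        'mean_diff': round(sum(diffs) / len(diffs), precision),
        'max_diff': round(max(diffs), precision)
    }


stats = data_stats([])            # empty input -> {} rather than None
# .get supplies a default instead of raising KeyError, and works
# uniformly whether or not the key is present; with a None return,
# every caller would need an `if stats is not None` check first.
mean = stats.get('mean_diff', 0.0)
```

Printing the whole dict also still works on the empty-dict path, which matches the stated use case.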
Overview
Restructured the lsl_client to pull data from LSL in a way that ensures the latest data is available in the buffer for querying.
Ticket
https://www.pivotaltracker.com/story/show/186539529
Contributions
Test
Discussion
According to the LSL docs (https://labstreaminglayer.readthedocs.io/info/faqs.html), setting max_buflen on a StreamInlet should ensure that it keeps only that amount of data (in seconds) and discards the rest. We were relying on this functionality to get data for a decision. However, in recent versions of LSL it seems to sometimes discard too much data. This seems to occur after long pauses: when we queried for the latest 2400 records, we would sometimes only get 200-300. I'm still working on creating a minimal example to replicate this problem.
Note also that I am still getting application crashes (Segmentation fault) if the application sits too long on the opening screen or if a task is paused for a while. I'm not sure what's causing this.
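The symptom can be modeled in pure Python with a bounded deque playing the role of the inlet's time-bounded buffer. This is a conceptual sketch of the failure mode (the buffer behaving as if it were much smaller than configured), not pylsl code; the sample rate is an assumed value for illustration.

```python
from collections import deque

SAMPLE_RATE = 300  # samples per second (assumed for illustration)


def simulate_pull(buffer_seconds: int, pause_seconds: int, wanted: int) -> int:
    """Model an inlet whose bounded buffer silently drops old samples.

    During a pause of pause_seconds, samples keep streaming in but
    nothing is pulled, so the bounded buffer retains only the newest
    buffer_seconds worth. Returns how many of the `wanted` samples
    survive to be pulled afterwards."""
    buf = deque(maxlen=buffer_seconds * SAMPLE_RATE)
    # samples arriving (and partially discarded) during the pause:
    buf.extend(range(pause_seconds * SAMPLE_RATE))
    return min(wanted, len(buf))


# A buffer sized for the query survives the pause intact; a buffer that
# effectively holds only ~1 second yields a few hundred of the 2400
# requested records, matching the numbers observed in testing.
ok = simulate_pull(buffer_seconds=8, pause_seconds=60, wanted=2400)
short = simulate_pull(buffer_seconds=1, pause_seconds=60, wanted=2400)
```

If the minimal replication confirms that recent LSL versions behave like the undersized case even when max_buflen is large enough, that would explain the 200-300 record results.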