Large table component #192
Conversation
TODO:
- Unit tests
- Inclusion in modules (characterization and cohort diagnostics)
- Download data functionality (implement promises callback?)
- Make appearance as close to other tables as possible
- Allow row selection callback
- Allow as many customisable reactable options as possible (beyond search and filters, which will not easily work)
- Make more compatible with query namespaces to automatically generate advanced column filters
Codecov Report
```diff
@@            Coverage Diff             @@
##           develop     #192      +/-   ##
===========================================
+ Coverage    86.91%   87.02%   +0.11%
===========================================
  Files           74       75       +1
  Lines        16976    17129     +153
===========================================
+ Hits         14754    14906     +152
- Misses        2222     2223       +1
```
All updates look good
This adds a component module and viewer for creating tables around very large queries (e.g. 50,000+ rows), which would normally cause reactable to crash.
This paginates the results on the server using LIMIT and OFFSET parameters (a minimal sketch of the idea follows below). Note that this will only be supported by some database platforms (e.g. PostgreSQL, SQLite, Redshift, and possibly some others) but not SQL Server, which requires an ORDER BY clause to do offsetting.
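A rough sketch of the pagination approach, assuming a ResultModelManager-style connection handler with a queryDb() method; the getPage() function and its signature here are illustrative, not the module's actual interface:

```r
# Illustrative only: fetch a single page of results by appending
# LIMIT/OFFSET to a base query. Assumes a connection handler exposing
# queryDb(), as in ResultModelManager; names here are not the real API.
getPage <- function(connectionHandler, baseQuery, pageSize, pageNum) {
  sql <- paste(
    baseQuery,
    "LIMIT", pageSize,
    "OFFSET", (pageNum - 1) * pageSize
  )
  connectionHandler$queryDb(sql)
}

# e.g. page 3 at 100 rows per page:
# getPage(connectionHandler, "SELECT * FROM my_schema.big_table", 100, 3)
```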
Please see the fully executable example in extras/examples/largeTables.R.

A note on the implementation: I created an R6 class, LargeDataTable, around this that could be subclassed if you wanted to use something that isn't a connection handler. For example, you could query a WebAPI instance with HTTP requests, or page through a large CSV file (see the sketch after this paragraph). Such implementations shouldn't be too hard, so if that need arises I can create some more classes around it.

Once we agree the API for this is good enough, I will add it to the CohortDiagnostics and Characterization modules, as well as anywhere else with tables that frequently break apps.
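As a hypothetical illustration of that subclassing idea, here is a sketch of a CSV-backed table source. The class name, the getPage()/count() methods, and the inheritance hook are assumptions for illustration, not LargeDataTable's actual interface:

```r
library(R6)

# Hypothetical page-by-page reader over a large CSV file, standing in for
# a database-backed source. Not the real LargeDataTable API.
CsvDataTable <- R6::R6Class(
  "CsvDataTable",
  # inherit = LargeDataTable,  # inside the package, inherit the base class
  public = list(
    filePath = NULL,
    initialize = function(filePath) {
      self$filePath <- filePath
    },
    # Return one page of rows: skip past the header and earlier pages,
    # then read pageSize rows, reusing the header for column names.
    getPage = function(pageSize, pageNum) {
      header <- names(read.csv(self$filePath, nrows = 1))
      read.csv(self$filePath,
               header = FALSE,
               col.names = header,
               skip = (pageNum - 1) * pageSize + 1,  # +1 skips the header
               nrows = pageSize)
    },
    # Total row count, needed to compute the number of pages.
    # (Reads the whole file; fine for a sketch, not for production.)
    count = function() {
      length(readLines(self$filePath)) - 1  # minus the header row
    }
  )
)

# Usage: first 100-row page of a (hypothetical) results file.
# csvTable <- CsvDataTable$new("big_results.csv")
# csvTable$getPage(pageSize = 100, pageNum = 1)
```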