Had a very helpful chat with @almarklein the other day about abstraction layers and wanted to put some of my thoughts down here.
Here were my thoughts leading up to the chat, and what I came away thinking.
ndv is coming along nicely as a single array viewer. The model/view pattern is decent (though definitely not perfect). We can effectively support widgets in Qt, Wx, and Jupyter, embedding a scene graph backed by either vispy or pygfx, suggesting we have at least a reasonably decent pure model backing all this stuff.
We absolutely want to be able to overlay additional "stuff". @gselzer has already done some nice work overlaying ROI shapes (to select a region), and has also made lovely image histogram widgets for both vispy and pygfx. Additionally, many people have requested the ability to overlay different images at different scales, grids of images at different places, etc. All of this requires a better model of the scene graph.
For now, the interface/adaptors between the controller and the vispy/pygfx scene graphs have been built ad hoc, with "handle" objects that normalize those backends to a common internal API. (For example, an image on a canvas needs an ImageHandle ... which we've implemented for both vispy and pygfx.) However, that pattern is extremely similar to what I was trying to do with microvis, and I liked the patterns in microvis a bit better than what has evolved here.
In trying to see whether I could use the same ideas from microvis here, I went back to clean up microvis (updating the pydantic models and trying to make things all a bit clearer), and that experiment now lives at scenex. For example, in scenex there is an Image model (representing an image node in a scene graph), and each supported backend has an adapter (like the pygfx Image) that can receive change events on the (psygnal-evented) model, all adhering to an ImageAdaptor ABC that is similar to our ImageHandle above. While I very much enjoy how it feels to use the microvis/scenex pattern (it feels great in an IDE and in IPython, and serializes nicely), and I would like to explore whether it could replace the Canvas and ImageHandle patterns we've got going on in ndv, it's always hard to escape the fear that we're possibly doing too much abstracting.
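To make the pattern concrete, here is a minimal sketch of the evented-model-plus-adaptor idea. All names here are illustrative (the real scenex models use psygnal Signals; this stand-in uses a plain callback list, and `FakeBackendImage` stands in for a vispy/pygfx adaptor):

```python
from abc import ABC, abstractmethod
from typing import Any, Callable

class Image:
    """Declarative model of an image node in a scene graph.

    Stand-in for a psygnal-evented model: setting a field notifies listeners.
    """

    def __init__(self, cmap: str = "gray") -> None:
        self._cmap = cmap
        self._callbacks: list[Callable[[str, Any], None]] = []

    def connect(self, cb: Callable[[str, Any], None]) -> None:
        self._callbacks.append(cb)

    @property
    def cmap(self) -> str:
        return self._cmap

    @cmap.setter
    def cmap(self, value: str) -> None:
        self._cmap = value
        for cb in self._callbacks:
            cb("cmap", value)

class ImageAdaptor(ABC):
    """Backend-agnostic interface each backend must implement (names hypothetical)."""

    @abstractmethod
    def _snx_set_cmap(self, cmap: str) -> None: ...

class FakeBackendImage(ImageAdaptor):
    """Stand-in for a vispy/pygfx adaptor that applies model changes to a native node."""

    def __init__(self, model: Image) -> None:
        self.applied_cmap = model.cmap
        model.connect(self._on_change)

    def _on_change(self, field: str, value: Any) -> None:
        # dispatch model change events to the matching adaptor method
        if field == "cmap":
            self._snx_set_cmap(value)

    def _snx_set_cmap(self, cmap: str) -> None:
        # a real adaptor would set this on the backend's image node
        self.applied_cmap = cmap

model = Image()
view = FakeBackendImage(model)
model.cmap = "viridis"  # mutating the model drives the view
print(view.applied_cmap)  # viridis
```

The appeal is that the model is pure Python (so it serializes, introspects, and tests nicely), while each backend only has to implement the small adaptor surface.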
So I reached out to Almar, who was instrumental in both vispy and pygfx and is also an expert in implementing good abstraction patterns, just to get his thoughts. One very good reminder there was about the cost/benefit of abstracting:
the benefit of abstraction, obviously, is a standard interface that lets you swap various backends in and out. It gives some degree of "future proofing", in that it makes very clear what you need from an external API to support your needs. A big motivation for doing all this in the first place came from frustrations with the inflexibility of napari, which is very tightly coupled to both Qt (making it extremely hard to offer people a Jupyter interface) and VisPy (making it hard to use more modern libraries like pygfx or datoviz). So that's why I go through all the hassle of defining interfaces and abstractions.
the cost of abstraction is that you essentially limit yourself to the intersection of features common to all backends. And where their internal models conflict, you can bang your head trying to come up with a common pattern (as happened with the camera model in microvis; see tlambert03/microvis#38 and tlambert03/microvis#47). You can of course add "escape hatches" that let someone provide backend-specific stuff, for example, to get the most out of pygfx while using it as a backend for scenex/ndv. But once you do that, the abstraction leaks, and people start using your program in a way that tightly relies on a specific backend. If one backend gets the most attention, then eventually you might as well have just used that backend. One could also argue that another cost is complexity, but that is a bit more subtle, since a decent abstraction can also quickly direct the developer to the exact place where "our" library interacts with "their" library. For example, it's very easy to see how/where scenex tells pygfx to set the image colormap by grepping for _snx_set_cmap.
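The "escape hatch" trade-off can be sketched in a few lines. This is a hypothetical handle (not ndv's actual API) that exposes the backend-native object directly; a dict stands in for a real vispy/pygfx node:

```python
class VispyImageHandle:
    """Hypothetical adaptor that wraps a backend-native node."""

    def __init__(self, native: object) -> None:
        self._native = native

    @property
    def native(self) -> object:
        # escape hatch: callers get the raw backend node, gaining access to
        # backend-only features at the cost of coupling their code to vispy
        return self._native

# a dict stands in for a vispy visuals node with backend-specific properties
handle = VispyImageHandle(native={"interpolation": "nearest"})
handle.native["interpolation"] = "cubic"  # backend-specific tweak, bypassing the abstraction
```

Any caller that reaches through `.native` now only works with that one backend, which is exactly how the abstraction starts to leak.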
So, with those costs/benefits in mind, I don't particularly dislike having a backend adaptor pattern. There is quite a bit of functionality in the intersection of the backends we're interested in. And even if we really only use it for a single backend, it still accomplishes some degree of future proofing by clearly delineating what we need from an external library, and it enables things like the nice model/view pattern in scenex, even if the backend library wasn't specifically designed for that kind of thing. But it does raise the question of why we support both vispy and pygfx. The answer for me has always been something along the lines of "vispy is more mature and maybe a bit more feature-rich at the moment, but pygfx uses more modern tech and (imho) has a clearer codebase". But it's worth re-evaluating that. I will say it's been easier to do things like off-screen rendering for Jupyter using pygfx, without needing xvfb, for example. And I suspect that pygfx will receive more attention in the coming year(s).
I guess some action items or questions here, then, are:
should we continue to support and test both and include them both on the install grid?
what exactly can we achieve using vispy that we can't using pygfx? In other words, what would we lose by dropping it? Whether that be ease of installation, performance, or the ability to render/express something specific. Yes, there are some things like image gamma, but I found that pretty easy to add in a PR to pygfx, so we can certainly help build where needed.
should we try to poke at scenex and replace our adapter layer here (well, that's more of a "I'd like us to" than a "should we")... but that might hit issues of camera transforms (which perhaps @almarklein could help us resolve)
I agree with most if not all of what has been written here!
> what exactly can we achieve using vispy that we can't using pygfx? In other words, what would we lose by dropping it? Whether that be ease of installation, performance, or the ability to render/express something specific. Yes, there are some things like image gamma, but I found that pretty easy to add in a PR to pygfx, so we can certainly help build where needed.
I'd agree that pygfx does/could do everything that we need, and I certainly think that it is more modern - I'd think the only rationale for keeping vispy would be that supporting vispy itself is valuable, which to me is an open question. I think it is certainly more popular/widespread - might a downstream application already using vispy want to integrate ndv? It would be unfortunate to have to depend on both graphics frameworks...
> should we try to poke at scenex and replace our adapter layer here (well, that's more of a "I'd like us to" than a "should we")... but that might hit issues of camera transforms (which perhaps @almarklein could help us resolve)
Long term, yeah, probably. I'd prefer to approach this as we add features needing it (e.g. I think scenex is probably overkill for what is being added in #114, but if/when we support multiple ROIs, images, etc., scenex might be more valuable).