Dev session/Software highlight: Dendrify: a new framework for seamless incorporation of dendrites in Spiking Neural Networks #93
And with Brian as well 😏
Hi everyone,
Please let me know what you think :)
The abstract and paper make a strong claim: "Therefore, this category of models is unsuitable for large network simulations, where computational efficiency is a key priority." I note there is no reference serving as the basis for this claim, nor any comparative benchmark against any such tool. A sharp reviewer would ask you for that. This claim is probably based on informal observations that people struggle to scale NEURON simulations to networks of sizes that point-neuron simulators trivially reach. We at team Arbor think this is an artifact of outdated tooling, not a fundamental concern. Naturally, a middle ground that abstracts over some of the biophysics is going to save computational cycles, and therefore time, but that is a modelling (and tooling) choice, not so much a fundamental constraint of simulating large networks with biophysically detailed neurons. Using Arbor, there is no difference between using your machine and a supercomputer. In addition, tools like Arbor and NEURON both allow the user to scale the granularity, and thus the number of differential equations to be solved, up and down. It would be interesting to see how Dendrify compares against a coarse Arbor or NEURON simulation, for instance. So, some substance to the performance claim (and maybe achievable network sizes) would be good for the target audience to have: for which parameters is Dendrify a solid choice, and for which perhaps not? Also, in the intro about dendrites, you could discuss what Dendrify omits, for people coming from the other end (SNNs with fully detailed neurons).
Hi, thank you for your comment and your suggestions. You indeed raise some valid points that I will try to respond to. A) I stand by the claim that you mentioned because it's true!! If you read the entire paragraph we write:
Do you disagree with the claim that more complex neuron models with detailed morphology and ionic mechanisms have a higher computational cost? There is no comparison between any tools here. It's just a general comment on how performance is affected when increasing complexity. B) You mention: "Naturally, a middle ground that abstracts over some of the biophysics is going to save computational cycles, and therefore time, but that is a modelling (and tooling) choice, not so much a fundamental constraint of simulating large networks with biophysically detailed neurons." Yes!! I totally agree on that. Dendrify is not just a tool, but also a theoretical framework that tries to guide people on how to create simplistic (few-compartment) models that reproduce as many key dendritic features as possible. We also note in the manuscript that:
You can take the principles we mention in the manuscript to design your model on any simulator software of your choice. C) "Using Arbor, there is no difference between using your machine and a supercomputer." What exactly do you mean by that? Is there any reference for that? D) "In addition, tools like Arbor and NEURON both allow the user to scale the granularity and thus number of diff. eqns to be solved up and down. It would be interesting to see how Dendrify compares against a coarse Arbor or NEURON simulation for instance." I would like to clarify that Dendrify is not a simulator. It's just a plugin for Brian 2 that helps create simplistic compartmental models with event-driven spiking mechanisms. All the performance benefits of our approach come from a) model and mechanism simplification, and b) Brian's optimizations. For example, models made with Dendrify+Brian can also run on NVIDIA GPUs thanks to Brian2CUDA. Notably, the same model that can run on a GPU also runs on my iPad with only one line of code being different. In this project we also cared a lot about flexibility and ease of use. Is our approach the fastest possible? I have no idea. But I do know it's fast enough, flexible and easy to learn. E) "So, some substance to the performance claim (and maybe achievable network sizes) would be good for the target audience to have: for which parameters is Dendrify a solid choice and for which perhaps not?" That's actually a really nice idea and I will definitely include it in future presentations or tutorials. Thanks a lot for that. F) "Also, in the intro about dendrites, you could discuss what Dendrify omits, for people coming from the other end (SNNs with fully detailed neurons)." Thanks for the suggestion. We have 2 paragraphs just for that in the discussion:
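For readers coming from point-neuron SNNs, the "few-compartment extension of I&F models" idea discussed above can be sketched in a few lines of plain Python. This is purely illustrative (not Dendrify's actual implementation, and every parameter value below is invented): a leaky integrate-and-fire soma coupled to one passive dendritic compartment through an axial conductance.

```python
# Toy two-compartment leaky integrate-and-fire model.
# NOT Dendrify's implementation; all parameter values are invented.
dt = 0.1          # time step (ms)
E_L = -70.0       # leak reversal potential (mV)
g_L = 0.01        # leak conductance (arbitrary units)
g_axial = 0.02    # axial conductance coupling soma and dendrite
C = 0.2           # membrane capacitance (arbitrary units)
v_thresh = -50.0  # somatic spike threshold (mV)
v_reset = -65.0   # somatic reset potential (mV)

v_soma, v_dend = E_L, E_L
spike_times = []

for step in range(5000):                          # 500 ms of simulated time
    t = step * dt
    I_dend = 1.0 if 100.0 <= t < 300.0 else 0.0   # current injected into the dendrite
    I_axial = g_axial * (v_dend - v_soma)         # axial current, dendrite -> soma
    v_soma += dt * (g_L * (E_L - v_soma) + I_axial) / C
    v_dend += dt * (g_L * (E_L - v_dend) - I_axial + I_dend) / C
    if v_soma >= v_thresh:                        # fire-and-reset: no HH equations
        spike_times.append(t)
        v_soma = v_reset

print(f"{len(spike_times)} somatic spikes driven by dendritic input")
```

The point of the sketch is that the axial-coupling term is the only "non-standard" mechanism needed on top of an ordinary I&F model; the soma still spikes by a simple threshold-and-reset rule.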
Hi everyone. I am obviously biased and sympathetic towards the Brian2/Dendrify approach (surprise 😉 ). But taking a quick step back from the ongoing discussion here: I think it would be great to follow the dendrify presentation in the Software WG with an open discussion of this topic (e.g. "what is the right level of detail for spiking neural networks?"). Of course the trivial answer is "It depends", but I think there are some non-trivial things to say about it as well.
@mstimberg Exactly, even the term SNN might have a different meaning depending on who you ask. A biologist might understand something fully detailed that resembles a brain slice, while a neuroAI researcher something like a deep net of point spiking neurons. Different levels of abstraction serve different purposes, but all are useful for the scientific community. Model detail should always be adjusted based on the question you want to answer.
A) Your claim is "Therefore, this category of models is unsuitable for large-network simulations, where computational efficiency is a key priority." I can't find backing for that claim. B) Good :) In fact, there are tools that 'scale' morphologies for biophysical simulators (Wybo's NEAT and Cuntz's TREES come to mind). C) It's a design decision that was basically the reason to start the project :) Modelling with Arbor is fundamentally independent from the execution, and executing Arbor on HPC (GPU or not) is as simple as on your machine. No coding or restructuring of the simulation required, just let Arbor know. D) This goes back to the initial claim: if you say you are faster, you're supposed to say by how much and under which conditions. That's what makes a publication a publication. E) Great! In the end you want to show things you can do, and position your tool: in which cases researchers should be interested, and in which they should perhaps consider other alternatives. F) I hadn't gotten this far, thanks!
@mstimberg It harkens back to the discussion about the choose-your-simulator tool: what's the right level? In some cases an immediate answer may be possible, but in many not. Potentially, intermediate tools or modes such as Dendrify/NEAT/TREES, but also co-simulation efforts, could help a researcher explore and find that right level. I suppose the difficulty of actually validating the answer to this question also makes it hard to discuss in objective terms. I think the level (and thus the simulator) you pick is simply a choice you make, and one you correct if you think you need to. Maybe a detailed decision tree, based on our collective experiences and knowledge, is an idea?
A) Please explain to me why you find this claim problematic or why it requires further justification. Not all students or researchers have access to High Performance Computing clusters or the resources of the Human/Blue Brain Project. How can an average PC run a network of N >= 10^4 if each neuron model consists of >10^3 segments, >10^4 synapses and a realistic distribution of numerous ion channels? I hope that you don't see this claim as an attempt to disregard efforts like Arbor, because it's really not. Our goal was simply to extend point-neuron models so that they respect some fundamental biological mechanisms that are arguably ignored in the vast majority of SNN studies out there. Not to substitute detailed biophysical models, or any established simulator software. B) Let me include Neuron_reduce in your list. We have actually used both Neuron_reduce and NEAT in the lab and they are great for morphological reduction (but they have some differences in how they achieve this). C) Kudos for that!! D) The reason why we did not have a direct comparison with other simulators like NEST, Arbor & NEURON is that in Dendrify we model dendritic spikes in an event-driven fashion (without the HH equations) by taking advantage of Brian's custom events. To make this comparison fair, we would need to implement (and optimize) this functionality in all the above simulators, which would be super time-consuming (I am not an expert in any of these) and also beyond the scope of this project. But we did add in the final published manuscript a figure showing a rough estimate of how long it takes to build AND run 3 different models of varying size (see https://www.nature.com/articles/s41467-022-35747-8/figures/6). We wanted to simulate a "real-life" scenario that also takes into account the development and testing phase of a model, not just its final, optimized version. That's why we did not use any tricks like C++ code generation or GPU acceleration.
E) We are on the same page. But to be fair to all the teams that develop simulation tools, this "tool guide" should be decided here collectively and not be compiled by me 😅. F) No problem. If you have any further recommendations or issues you would like to address, feel free to do so. Thanks.
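For readers unfamiliar with the term, the "event-driven dendritic spikes without HH equations" mentioned in point D can be caricatured in plain Python. This is a hypothetical sketch, not Dendrify's or Brian's actual code; the threshold, refractory period and waveform values are all invented. The idea: when the dendritic voltage crosses a threshold, a stereotyped current waveform is played back instead of integrating Hodgkin-Huxley channel equations.

```python
# Caricature of an event-driven dendritic spike (illustrative only; NOT
# Dendrify's or Brian's code -- all parameter values are invented).
dt = 0.1                  # time step (ms)
E_L = -70.0               # leak reversal potential (mV)
g_L = 0.01                # leak conductance (arbitrary units)
C = 0.2                   # membrane capacitance (arbitrary units)
dspike_threshold = -55.0  # dendritic spike threshold (mV)
refractory = 5.0          # ms before another dendritic spike can start

# Stereotyped spike current: a brief depolarizing pulse followed by a
# weaker hyperpolarizing tail (one sample per time step).
waveform = [1.5] * 10 + [-0.4] * 20

v = E_L
pending = []              # unplayed samples of the active waveform
last_event = float("-inf")
n_dspikes = 0

for step in range(3000):                          # 300 ms of simulated time
    t = step * dt
    I_syn = 0.6 if 50.0 <= t < 150.0 else 0.0     # synaptic drive
    I_dspike = pending.pop(0) if pending else 0.0
    v += dt * (g_L * (E_L - v) + I_syn + I_dspike) / C
    # The "event": crossing threshold outside the refractory period
    # simply launches the canned waveform; no gating variables are integrated.
    if v >= dspike_threshold and (t - last_event) >= refractory:
        pending = list(waveform)
        last_event = t
        n_dspikes += 1

print(f"{n_dspikes} event-driven dendritic spikes")
```

Replacing the channel dynamics with a precomputed waveform is what removes the stiff HH equations from the system, which is where the approach trades biophysical detail for speed.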
Hello and thanks for this lively discussion. Another Arborist here, just to clarify. It's very encouraging to see people actively working on bridging
isn't generally applicable, thus distorting the conclusion. There's no fundamental issue preventing Neuron
Hi @thorstenhater, thank you for your comment. I think that all this discussion regarding point A started from a misunderstanding that shifts the focus away from what we wanted to achieve with Dendrify. At the end of the Introduction we write:
We clearly mention that what we propose can be implemented in ANY simulator. Moreover, this is the first paragraph of the Discussion:
The whole idea behind Dendrify was to highlight the importance of dendrites in biological neuronal networks and provide a theoretical guide and a tool to help develop efficient yet biologically-plausible neuronal models. The only claims regarding any performance advantages are in comparison to detailed biophysical neuron models. We NEVER mention anything regarding performance advantages over other simulators. That's why we don't have any comparative benchmarks for that. Dendrify does not compete with other simulators and we certainly do not try to cancel them. In fact, we not only make heavy use of other simulators in the lab, but Dendrify could have been implemented in any of them. Isolating a few words to accuse us indirectly of wanting to mislead the community and cancel the efforts of other teams is utterly unfounded, outrageous and unacceptable. The aim of this project is entirely different than what you make it look like. The world does not revolve around Arbor and I will not continue this fruitless discussion. If you have any complaints about our paper, please contact the corresponding author and if they are valid I commit that we will make amends for that.
I know this is seen as trickery by some, but that only supports my initial remark on the difficulties of scaling NEURON. These things are not really all that tricky or modern; they're old news in many other branches of simulation. I agree, this is difficult when using NEURON, but that's only a reason not to use it, not to say that it's fundamentally hard or slow.
I'm still not sure I get it: does 'other simulators' include biophysically detailed simulators or not? Because those benchmarks would be in support of your claim (probably), but they are not present. Without them, you really can't say. And thanks for the ref in the published paper. We help people benchmark Arbor vs NEURON sometimes, so we know how nontrivial that can be. But, even with NEURON, there are usually some easy wins that will improve the wall time by a few factors, maybe an order of magnitude. I understand your focus is more on the scientific study enabled by Dendrify than on performance, so I won't belabor the point any further, but I did not put the claim there ;)
Hi everyone, this discussion got a bit more heated than I expected 😮 I don't think we have ever adopted a formal "Code of Conduct" in the Software WG (@sanjayankur31: maybe we should?), but just a reminder that discussions here should remain respectful; in particular, don't be condescending and assume good faith. Let me say a few words regarding the content, since I have been peripherally involved in dendrify's creation via discussions with @mpgl: I probably wouldn't have formulated a few statements as strongly (e.g. claiming multi-compartment models are "unsuitable" for large-scale models; this of course depends on the kind of software/hardware you have, and on what you call "large-scale"). But I agree with @mpgl here that I don't see anything in the paper that makes any claim regarding performance that is contentious or needs further justification. And the paper certainly doesn't discard detailed simulations. Let me rephrase what it says from my understanding: it claims that the vast majority of large network simulations use point models and therefore ignore the role of dendrites (obviously true), and that models taking into account dendrites typically use complex multi-compartmental models with detailed channel descriptions which are much slower to simulate (maybe less obvious, but I think clearly true). It then proposes a new, intermediate approach and its implementation for the Brian simulator. Importantly, the proposed approach is not just a downscaling of a complex morphology, but rather an extension of I&F models to multiple compartments (which needs non-standard mechanisms for the axial currents, etc.), and a simplified model of dendritic spikes. I've seen approaches that were similar to this in parts, but I've never seen anything like this available for any simulator I know of (e.g. NEST's only multi-compartment I&F model is very different).
As @mpgl and authors say in their paper, in principle you could implement this for any other simulator (I'm sure you can make this work via NMODL in NEURON or Arbor). I'd argue that it is easier to do in Brian than in most simulators, but that's beside the point. Given all that, I don't see any need to benchmark against any other tool. There are two comparisons that you could potentially do: 1. you benchmark the dendrify/Brian simulations against other simulators using the same approach. That could make sense, but I don't think any other simulator has this approach built-in. 2. you benchmark this approach against more complex models (e.g. a 5-compartment model with active dendrites, but using more detailed HH-type equations instead of an I&F approach). Ok, but it is kind of obvious that it will be faster, no? Even if – and I got the feeling that this is what @brenthuisman and @thorstenhater assume
Hopefully we did not scare away @mpgl from presenting the work at the Software WG meeting 😬 It would certainly be interesting to many in the group – and I'm sure there will be some discussions 😉 PS:
I think there's a misunderstanding here: @mpgl's remark "we did not use any tricks like C++ code generation or GPU acceleration" referred to Brian's "tricks", i.e. the dendrify benchmarks in the paper use Brian's basic pure-Python mode, instead of enabling C++/OpenMP or CUDA code generation. [Edited out a ranty bit that wasn't really adding anything helpful]
@mstimberg Thanks for pointing out the existence of the reference to Willem Wybo, I thought I checked for it but apparently not, my bad. I still think ending with "making it mathematically intractable and impractical for use in SNNs." is a strong claim, which is setting the reader up for a demonstration of it, but let's let that rest for now and see when the session will be planned. I will poke Willem to see if he can make it there.
Yes. I think we all agree that we want to be able to discuss all our work freely, but there are a few very important bits to keep in mind when engaging in any debate/discussion (not specifically due to the discussion here---just in general). The cardinal rule is to "be excellent to each other": If there's any doubt at all that one is not being excellent to anyone, one should please reconsider their response. I've filed #123 for this now. Getting back to the session here, we'd love to hear about dendrify and yes, a good discussion at the end of the session would be great. @mpgl : would you have a preference for the time of the session? We usually do about 1700 UTC to make it accessible to as wide an audience as possible, and we record the session and put it up on the INCF training space for others to watch at their convenience. Would Monday/Tuesday the week after do, 6th/7th of February? That'll also give us time to add a post to the website and publicise it and so on.
Hi @sanjayankur31, 1700 UTC is 7 pm here in Greece. 1600 UTC would perhaps be more convenient for me, but if this is a better time slot for everyone else, I really wouldn't mind starting then. Regarding the date, 7th of February would be great. So what is the format exactly and what should I prepare beforehand? Thanks.
Sure, 1600 should be fine too. That's 0800 on the US west coast, which is doable. Great, let's do 7th of Feb then. I'll go set up a zoom meeting. It's usually an hour, where you have 45 minutes to present: you can either do a presentation, or show a demo, or a combination of both and so on. It's really quite informal. Then the last 15 minutes are for discussion and questions. Could you please write a short abstract that we can use for the announcements on the website/mailing list/social media? I can even use the one from the paper, if that works for you?
@sanjayankur31 if you think that 0800 is perhaps too early for a lot of people, we can do it at 1600 UTC. Whatever you prefer. You may also use this abstract:
Let's do 1600 UTC on 7th Feb. I'll set up a zoom meeting and put up a post on the website etc. ASAP.
PR with post, zoom link, and calendar invite is up: #124
Sent announcements to the various mailing lists. You should get them in a few hours after they've gone through moderation. I'll send a reminder out to the wg's mailing list next week too (but not to the other lists). Please do forward the announcements to your colleagues.
@OCNS/software-wg : I've started the zoom meeting now. Please join at your convenience: https://ucl.zoom.us/j/91907842769?pwd=bnEzTU9Eem9SRmthSjJIRElFZ0xwUT09
(It looks like one needs a zoom account to view recordings---I can't seem to turn this setting off).
I had someone contact me during the talk who failed to join for the same reason, i.e. the zoom meeting was only accessible for authenticated users (and they did not want to sign up for a zoom account for some reason). I wonder whether there's a better option for future meetings? Regarding the recording, the idea is to also get it into the INCF training space where it would be accessible without constraints, right?
Yes, I've passed it on to the INCF folks, so it'll be on their Youtube channels when uploaded. I don't know if one can use Youtube without an account. It's a hard one---every platform will require some form of registration for recording, or require us to set up our own recording. If it's not zoom, it'll be google meet or jitsi or bigbluebutton. If folks don't want to set up accounts, that's fine, but we are limited to the services that are available to us. There's not a lot else we can do (there are probably folks who don't want to sign up for GitHub and so don't participate in lots of development too, but then we can't all host our own git forges etc. either). If someone wants to look into alternatives, totally happy to switch as long as it doesn't require me to do any more additional work for the recording :)
CC: @OCNS/software-wg
Saw the pre-print, and I think a software highlight would fit well for this:
Dendrify: a new framework for seamless incorporation of dendrites in Spiking Neural Networks | bioRxiv
Compatible with Brain