Dev session/Software highlight: Dendrify: a new framework for seamless incorporation of dendrites in Spiking Neural Networks #93

Open
sanjayankur31 opened this issue May 10, 2022 · 26 comments

@sanjayankur31
Collaborator

CC: @OCNS/software-wg

Saw the pre-print, and I think a software highlight would fit well for this:

Dendrify: a new framework for seamless incorporation of dendrites in Spiking Neural Networks | bioRxiv

Compatible with Brain

sanjayankur31 added the C: DevSessions, S: Needs comment, P: low, S: Needs to be announced, S: Needs location, S: Needs scheduling, S: Needs web post, S: help wanted, and C: Software Highlights labels on May 10, 2022
@mstimberg
Collaborator

Compatible with Brain

And with Brian as well 😏

@mpgl

mpgl commented Jan 17, 2023

Hi everyone,
thank you so much for inviting me to talk about Dendrify. It's an honour for me. @mstimberg asked me to choose the topic for a presentation, but I would like to discuss it with you so that we choose something that is interesting for you and the community. Some ideas include:

  1. Talk about dendrites, their properties, and what advantages they bring to the field of SNNs.
  2. Dendrify tutorial. How to develop a network of biologically realistic compartmental neurons.
  3. <insert topics that I haven't thought of>

Please let me know what you think :)

@brenthuisman

The abstract and paper make a strong claim: "Therefore, this category of models is unsuitable for large network simulations, where computational efficiency is a key priority." I note there is no reference serving as the basis for this claim, nor any comparative benchmark to any such tool. A sharp reviewer would ask you for that.

This claim is probably based on informal observations that people struggle to scale NEURON simulations to networks of the sizes that point-neuron simulators trivially reach. We at team Arbor think this is an artifact of outdated tooling, not a fundamental concern. Naturally, a middle ground that abstracts over some of the biophysics is going to save computational cycles, and therefore time, but that is a modelling (and tooling) choice, not so much a fundamental constraint of simulating large networks with biophysically detailed neurons. Using Arbor, there is no difference between using your machine and a supercomputer. In addition, tools like Arbor and Neuron both allow the user to scale the granularity, and thus the number of differential equations to be solved, up and down. It would be interesting to see how Dendrify compares against a coarse Arbor or Neuron simulation, for instance.

So, some substance to the performance claim (and maybe achievable network sizes) would be good for the target audience to have: for which parameters is Dendrify a solid choice and for which perhaps not?

Also, in the intro about dendrites, you could discuss what Dendrify omits, for people coming from the other end (SNNs with fully detailed neurons).

@mpgl

mpgl commented Jan 18, 2023

Hi, thank you for your comment and your suggestions. You raise some valid points that I will try to respond to.

A) I stand by the claim that you mentioned because it's true!! If you read the entire paragraph we write:

Conversely, biophysical models of neurons with a detailed morphology are ideal for studying how dendritic processing affects neuronal computations at the single-cell level 16. Such models comprise hundreds of compartments, each furnished with numerous ionic mechanisms to faithfully replicate the electrophysiological profile of simulated neurons. However, achieving high model accuracy is typically accompanied by increased complexity (e.g., higher CPU/GPU demands and larger run times), as numerous differential equations have to be solved at each simulation time step 16. Therefore, this category of models is unsuitable for large-network simulations, where computational efficiency is a key priority.

Do you disagree with the claim that more complex neuron models with detailed morphology and ionic mechanisms have higher computational cost? There is no comparison between any tools here. It's just a general comment on how performance is affected when increasing complexity.

B) You mention: "Naturally, a middle ground that abstracts over some of the biophysics is going to save computational cycles, and therefore time, but that is a modelling (and tooling) choice, not so much a fundamental constraint of simulating large networks with biophysically detailed neurons." Yes!! I totally agree on that. Dendrify is not just a tool, but also a theoretical framework that tries to guide people on how to create simplistic models (few-compartments) that reproduce as many key dendritic features as possible. We also note in the manuscript that:

Notably, the proposed guide builds upon established theoretical work 28,29,31, and its implementation is not exclusive to any simulator software.

You can take the principles we mention in the manuscript to design your model on any simulator software of your choice.

C) "Using Arbor, there is no difference between using your machine and a supercomputer. ". What exactly do you mean with that? Is there any reference for that?

D) "In addition, tools like Arbor and Neuron both allow the user to scale the granularity and thus number of diff. eqns to be solved up and down. It would be interesting to see how Dendrify compares against a course Arbor or Neuron simulation for instance." I would like to clarify that Dendrify is not a simulator. It's just a plugin for Brian 2 that helps create simplistic compartmental models with event-driven spiking mechanisms. All the performance benefits of our approach come a) From model and mechanisms simplification, b) Brian's optimizations. For example, models made with Dendrify+Brian are also run on NVIDIA GPUs thanks to Brian2CUDA. Notably the same model that can run on a GPU runs also on my iPad with only 1 code line being different. In this project we also cared a lot about flexibility and ease of use. Is our approach the fastest possible? I have no idea. But I do know it's fast enough, flexible and easy to learn.

E) "So, some substance to the performance claim (and maybe achievable network sizes) would be good for the target audience to have: for which parameters is Dendrify a solid choice and for which perhaps not?" That's actually a really nice idea and I will definitely include it in future presentations or tutorials. Thanks a lot for that.

F) "Also, in the intro about dendrites, you could discuss what Dendrify omits, for people coming from the other end (SNNs with fully detailed neurons)." Thanks for the suggestion. We have two paragraphs just for that in the discussion:

It is important to note that the presented modeling framework does not come without limitations. First, reduced compartmental models cannot compete with morphologically detailed models in terms of spatial resolution. More specifically, in neuronal models with detailed morphologies, each dendritic section consists of several segments used to ensure numerical simulation stability and allow more sophisticated and realistic synaptic placement. By contrast, with Dendrify, we aim to simply extend the point-neuron model by adding a few compartments that account for specific regions in the dendritic morphology. Another limitation is that Dendrify currently depends on Brian’s explicit integration methods to solve the equations of the reduced compartmental models. While this approach improves performance, it limits the number of compartments that can be simulated without loss of numerical accuracy 92. Since Dendrify is commonly used for neuron models with a small number of big compartments, we expect that explicit approaches and a reasonable simulation time step would not cause any substantial numerical issues. To test this, we directly compared Dendrify against SpatialNeuron (which utilizes an implicit method) using an adapted version of the four-compartment model shown in Fig. 3. We show that a model with few dendritic compartments and a relatively small integration time step (dt ≤ 0.1 ms) results in almost identical responses to Brian’s SpatialNeuron (Supplementary Figs. 8–17).

Another limitation pertains to our event-based implementation of spikes. Since we do not utilize the HH formalism, certain experimentally observed phenomena cannot be replicated by the standard models provided with Dendrify. These include the depolarization block emerging in response to strong current injections 93 or the reduction of backpropagation efficiency observed in some neuronal types during prolonged somatic activity 68. Moreover, the current version of Dendrify supports only Na+ and partially Ca2+ VGICs and ignores other known ion channel types 94. Finally, synaptic plasticity rules must be manually implemented using standard Brian 2 objects. However, Dendrify is a project in continuous development, and based on community feedback, many new features or improvements will be included in future updates.

@mstimberg
Collaborator

Hi everyone. I am obviously biased and sympathetic towards the Brian2/Dendrify approach (surprise 😉 ). But taking a quick step back from the ongoing discussion here: I think it would be great to follow the dendrify presentation in the Software WG by an open discussion of this topic (e.g. "what is the right level of detail for spiking neural networks?"). Of course the trivial answer is "It depends", but I think there are some non-trivial things to say about it as well.

@mpgl

mpgl commented Jan 18, 2023

@mstimberg Exactly, even the term SNN might have a different meaning depending on who you ask. A biologist might understand something fully detailed that resembles a brain slice, while a neuroAI researcher something like a deep net of point spiking neurons. Different levels of abstraction serve different purposes but are all useful for the scientific community. Model detail should always be adjusted based on the question you want to answer.

@brenthuisman

A) Your claim is "Therefore, this category of models is unsuitable for large-network simulations, where computational efficiency is a key priority." I can't find backing for that claim.

B) Good :) In fact, there are tools that 'scale' morphologies for biophysical simulators (Wybo's NEAT and Cuntz's TREES come to mind).

C) It's a design decision that was basically the reason to start the project :) Modelling with Arbor is fundamentally independent from the execution, and executing Arbor on HPC (GPU or no) is as simple as on your machine. No coding or restructuring of the simulation required, just let Arbor know.

D) This goes back to the initial claim: if you say you are faster, you're supposed to say by how much and in which conditions. That's what makes a publication a publication.

E) Great! In the end you want to show things you can do, and position your tool: in which cases researchers should be interested and in which they should perhaps consider alternatives.

F) I hadn't gotten this far, thanks!

@brenthuisman

@mstimberg It harkens back to the discussion about the choose-your-simulator tool: what's the right level? In some cases an immediate answer may be possible, but in many not. Potentially, intermediate tools or modes such as Dendrify/NEAT/TREES, but also co-simulation efforts, could help a researcher explore and find that right level. I suppose the difficulty of actually validating an answer to this question also makes it hard to discuss in objective terms. I think the level of simulator you pick is simply a choice you make, and correct if you think you need to. Maybe a detailed decision tree, based on our collective experience and knowledge, is an idea?

@mpgl

mpgl commented Jan 18, 2023

@brenthuisman

A) Please explain to me why you find this claim problematic or why it requires further justification. Not all students or researchers have access to high-performance computing clusters or the resources of the Human/Blue Brain Project. How can an average PC run a network of N ≥ 10^4 neurons if each neuron model consists of >10^3 segments, >10^4 synapses, and a realistic distribution of numerous ion channels? I hope that you don't see this claim as an attempt to disregard efforts like Arbor, because it's really not. Our goal was to simply extend point-neuron models so that they respect some fundamental biological mechanisms that are arguably ignored in the vast majority of SNN studies out there. Not to substitute for detailed biophysical models or any established simulator software.

B) Let me include Neuron_reduce in your list. We have actually used both Neuron_reduce and NEAT in the lab and they are great for morphological reduction (but they have some differences in how they achieve this).

C) Kudos for that!!

D) The reason we did not include a direct comparison with other simulators like NEST, Arbor, and NEURON is that in Dendrify we model dendritic spikes in an event-driven fashion (without the HH equations) by taking advantage of Brian's custom events (a rough, generic sketch of this mechanism follows the quote below). To make this comparison fair, we would need to implement (and optimize) this functionality in all the above simulators, which would be super time-consuming (I am not an expert in any of these) and also beyond the scope of this project. But we did add in the final published manuscript a figure showing a rough estimate of how long it takes to build AND run three different models of varying size (see https://www.nature.com/articles/s41467-022-35747-8/figures/6). We wanted to simulate a "real-life" scenario that also takes into account the development and testing phase of a model, not just its final, optimized version. That's why we did not use any tricks like C++ code generation or GPU acceleration.

We have shown that reduced compartmental I&F models, equipped with active, event-driven dendritic mechanisms, can reproduce numerous realistic dendritic functions. However, point-neuron models are currently the gold standard for SNN research thanks to their simplicity, efficiency, and scalability. To assess the viability of our approach for large-network simulations, we tested how Dendrify’s performance scales with increasing network size and complexity (Fig. 6). It is important to note that since simulation performance depends on multiple factors such as model complexity, hardware specifications, and case-specific optimizations (e.g., C++ code generation 42 or GPU acceleration 72,73), designing a single most representative test is unrealistic. For the sake of simplicity and to replicate a real-world usage scenario, all simulations presented in this section were performed on an average laptop using standard and widely used Python tools (Supplementary Table 4). We also run the most demanding test case on an iPad to showcase our approach’s universal compatibility and low computational cost.
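For anyone not familiar with Brian's custom events, here is a rough, generic sketch of the idea in plain Brian 2 (this is not Dendrify's actual code; every name and value below is illustrative): a threshold condition on the dendritic voltage defines a custom event, and crossing it injects a stereotyped depolarization instead of solving HH channel equations.

```python
from brian2 import *

# Minimal sketch of an event-driven "dendritic spike" in plain Brian 2.
# NOT Dendrify's code; names and values are illustrative only.
E_L = -70*mV          # resting potential
eqs = """
dv_dend/dt = (E_L - v_dend + drive + v_boost) / (20*ms) : volt
dv_boost/dt = -v_boost / (3*ms)                         : volt
drive : volt (constant)   # steady depolarizing input (voltage-based for simplicity)
allowed : boolean         # flag so the event fires only once per crossing
"""
G = NeuronGroup(
    1, eqs,
    events={"dspike": "v_dend > -40*mV and allowed"},  # custom dendritic threshold
    method="euler",
)
# When the threshold is crossed, add a stereotyped depolarization instead of
# integrating Na+/K+ (HH) channel equations.
G.run_on_event("dspike", "v_boost += 40*mV; allowed = False")
G.v_dend = E_L
G.allowed = True
G.drive = 35*mV   # strong enough to cross the -40 mV threshold

M = StateMonitor(G, "v_dend", record=0)
run(100*ms)
```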

E) We are on the same page. But to be fair to all teams that develop simulation tools, this "tool guide" should be decided here collectively and not be compiled by me 😅.

F) No problem. If you have any further recommendations or issues you would like to address, feel free to do so. Thanks.

@thorstenhater

Hello and thanks for this lively discussion.

Another Arborist here, just to clarify. It's very encouraging to see people actively working on bridging detailed and point models. I think @brenthuisman's concern with point A is one of precision, not of principle. The way it's written seems to discard detailed simulations -- and consequently Neuron, Arbor, and co -- completely. In particular

Such models comprise hundreds of compartments, each furnished with numerous ionic mechanisms to faithfully replicate the electrophysiological profile of simulated neurons.

isn't generally applicable, thus distorting the conclusion. There's no fundamental issue preventing Neuron or Arbor from running coarse-grained morphologies with approximate ion dynamics, up (down?) to using a single compartment and basic dynamics. The real test would be to compare the performance of similarly detailed models across different simulators.

@mpgl

mpgl commented Jan 18, 2023

Hi @thorstenhater, thank you for your comment.

I think that all this discussion regarding point A started from a misunderstanding that shifts the focus away from what we wanted to achieve with Dendrify. At the end of the Introduction we write:

Notably, the proposed guide builds upon established theoretical work 28,29,31, and its implementation is not exclusive to any simulator software.

We clearly mention that what we propose can be implemented in ANY simulator.

Moreover this is the first paragraph of the Discussion:

Establishing a rapport between biological and artificial neural networks is necessary for understanding and hopefully replicating our brain’s superior computing capabilities 4,5,74. However, despite decades of research revealing the central role of dendrites in neuronal information processing 10,11,16,43, the dendritic contributions to network-level functions remain largely unexplored. Dendrify aims to promote the development of realistic spiking network models by providing a theoretical framework and a modeling toolkit for efficiently adding bioinspired dendritic mechanisms to SNNs. This is materialized by developing simplified yet biologically accurate neuron models optimal for network simulations in the Brian 2 simulator42.

The whole idea behind Dendrify was to highlight the importance of dendrites in biological neuronal networks and provide a theoretical guide and a tool to help develop efficient yet biologically plausible neuronal models. The only claims regarding any performance advantages are in comparison to detailed biophysical neuron models. We NEVER mention anything regarding performance advantages over other simulators. That's why we don't have any comparative benchmarks for that.

Dendrify does not compete with other simulators and we certainly do not try to cancel them. In fact, we not only make heavy usage of other simulators in the lab, but Dendrify could have been implemented in any of them. Isolating a few words to accuse us indirectly that we want to mislead the community and cancel the efforts of other teams is utterly unfounded, outrageous and unacceptable. The aim of this project is entirely different than what you make it look like. The world does not revolve around Arbor and I will not continue this fruitless discussion. If you have any complaints about our paper, please contact the corresponding author and if they are valid I commit that we will make amends for that.

@brenthuisman

We wanted to simulate a "real-life" scenario that also takes into account the development and testing phase of a model, not just its final, optimized version. That's why we did not use any tricks like C++ code generation or GPU acceleration.

I know this is seen as trickery by some, but that only supports my initial remark on the difficulties of scaling NEURON. These things are not really all that tricky or modern; they're old news in many other branches of simulation. I agree this is difficult when using NEURON, but that's only a reason not to use it, not to say that it's fundamentally hard or slow.

The only claims regarding any performance advantages are in comparison to detailed biophysical neuron models. We NEVER mention anything regarding performance advantages over other simulators. That's why we don't have any comparative benchmarks for that.

I'm still not sure I get it: does 'other simulators' include biophysically detailed simulators or not? Because those benchmarks would (probably) be in support of your claim, but they are not present. Without them, you really can't say. And thanks for the reference in the published paper. We sometimes help people benchmark Arbor vs. NEURON, so we know how nontrivial that can be. But even with NEURON, there are usually some easy wins that will improve the wall time by a few factors, maybe an order of magnitude.

I understand your focus is more on the scientific study enabled by Dendrify than performance, so I won't belabor the point any further, but I did not put the claim there ;)

@mstimberg
Collaborator

mstimberg commented Jan 19, 2023

Hi everyone, this discussion got a bit more heated than I expected 😮 I don't think we have ever adopted a formal "Code of Conduct" in the Software WG (@sanjayankur31: maybe we should?), but just a reminder that discussions here should remain respectful; in particular, don't be condescending and assume good faith.

Let me say a few words regarding the content, since I have been peripherally involved in dendrify's creation via discussions with @mpgl: I probably wouldn't have formulated a few statements as strongly (e.g. claiming multi-compartment models are "unsuitable" for large-scale models; this of course depends on the kind of software/hardware you have, and on what you call "large-scale"). But I agree with @mpgl here that I don't see anything in the paper that makes any claim regarding performance that is contentious or needs further justification. And the paper certainly doesn't discard detailed simulations.

Let me rephrase what it says from my understanding: it claims that the vast majority of large network simulations use point models and therefore ignore the role of dendrites (obviously true), and that models taking into account dendrites typically use complex multi-compartmental models with detailed channel descriptions which are much slower to simulate (maybe less obvious, but I think clearly true). It then proposes a new, intermediate approach and its implementation for the Brian simulator. Importantly, the proposed approach is not just a downscaling of a complex morphology, but rather an extension of I&F models to multiple compartments (which needs non-standard mechanisms for the axial currents, etc.), and a simplified model of dendritic spikes.

I've seen approaches that were similar to this in parts, but I've never seen anything like this available for any simulator I know of (e.g. NEST's only multi-compartment I&F model is very different). As @mpgl and authors say in their paper, in principle you could implement this for any other simulator (I'm sure you can make this work via NMODL in NEURON or Arbor). I'd argue that it is easier to do in Brian than in most simulators, but that's beside the point.
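To make that a bit more concrete, here is a bare-bones sketch in Brian 2 of what such an extension can look like (an illustration of the general idea only, not dendrify's actual equations, and the parameter values are made up): two coupled leaky compartments with an explicit axial current, and an I&F threshold/reset on the soma only.

```python
from brian2 import *

# Sketch only: a soma plus one dendritic compartment, coupled by an explicit
# axial current, with an I&F threshold on the soma. Parameter values are
# arbitrary and not taken from the dendrify paper.
C_s, C_d = 200*pF, 100*pF
g_L_s, g_L_d = 10*nS, 5*nS
g_axial = 15*nS          # coupling conductance between the two compartments
E_L = -70*mV

eqs = """
dv_s/dt = (g_L_s*(E_L - v_s) + g_axial*(v_d - v_s) + I_s) / C_s : volt
dv_d/dt = (g_L_d*(E_L - v_d) + g_axial*(v_s - v_d) + I_d) / C_d : volt
I_s : amp (constant)
I_d : amp (constant)
"""
neuron = NeuronGroup(1, eqs, threshold="v_s > -50*mV", reset="v_s = E_L",
                     method="euler")
neuron.v_s = E_L
neuron.v_d = E_L
neuron.I_d = 500*pA      # strong dendritic drive so the soma reaches threshold

mon = StateMonitor(neuron, ["v_s", "v_d"], record=0)
run(200*ms)
```

A simplified dendritic-spike mechanism (e.g. via a custom event on `v_d`, as sketched earlier in this thread) would then be added on top of the dendritic compartment.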

Given all that, I don't see any need to benchmark against any other tool. There are two comparisons that you could potentially do: 1. you benchmark the dendrify/Brian simulations against other simulators using the same approach. That could make sense, but I don't think any other simulator has this approach built-in. 2. you can benchmark this approach against more complex models (e.g. a 5-compartment model with active dendrites, but using more detailed HH-type equations instead of an I&F approach). Ok, but it is kind of obvious that it will be faster, no? Even if – and I got the feeling that this is what @brenthuisman and @thorstenhater assume ☺️ – the dendrify/Brian simulation would be slower than a more complex Arbor solution, so what? It would still be faster than the more complex simulation in Brian (which a user might prefer for other reasons), and if someone implemented it for Arbor, it would be faster than the more complex simulation again.

Hopefully we did not scare away @mpgl from presenting the work at the Software WG meeting 😬 It would certainly be interesting to many in the group – and I'm sure there will be some discussions 😉

PS:

We wanted to simulate a "real-life" scenario that also takes into account the development and testing phase of a model, not just its final, optimized version. That's why we did not use any tricks like C++ code generation or GPU acceleration.

I know this is seen as trickery by some, but that only supports my initial remark on the difficulties of scaling NEURON. These things are not really all that tricky or modern, they're old news in many other branches of simulation. I agree, this is difficult when using NEURON, but that's only a reason to not use that, not to say that it's fundamentally hard or slow.

I think there's a misunderstanding here: @mpgl's remark "we did not use any tricks like C++ code generation or GPU acceleration" referred to Brian's "tricks", i.e. the dendrify benchmarks in the paper use Brian's basic pure-Python mode, instead of enabling C++/OpenMP or CUDA code generation.

[Edited out a ranty bit that wasn't really adding anything helpful]

@brenthuisman

@mstimberg Thanks for pointing out the existence of the reference to Willem Wybo, I thought I checked for it but apparently not, my bad.

I still think ending with "making it mathematically intractable and impractical for use in SNNs." is a strong claim, which is setting the reader up for a demonstration of it, but let's let that rest for now and see when the session will be planned. I will poke Willem to see if he can make it there.

@sanjayankur31
Collaborator Author

Hi everyone, this discussion got a bit more heated than I expected 😮 I don't think we have ever adopted a formal "Code of Conduct" in the Software WG (@sanjayankur31: maybe we should?), but just a reminder that discussions here should remain respectful; in particular, don't be condescending and assume good faith.

Yes. I think we all agree that we want to be able to discuss all our work freely, but there are a few very important bits to keep in mind when engaging in any debate/discussion (not specifically due to the discussion here---just in general). The cardinal rule is to "be excellent to each other": If there's any doubt at all that one is not being excellent to anyone, one should please reconsider their response.

I've filed #123 for this now.


Getting back to the session here, we'd love to hear about dendrify and yes, a good discussion at the end of the session would be great.

@mpgl : would you have a preference for the time of the session? We usually do about 1700 UTC to make it accessible to as wide an audience as possible, and we record the session and put it up on the INCF training space for others to watch at their convenience. Would Monday/Tuesday the week after do, 6th/7th of February? That'll also give us time to add a post to the website and publicise it and so on.

@mpgl

mpgl commented Jan 24, 2023

Hi @sanjayankur31,

1700 UTC is 7 pm here in Greece. 1600 UTC would be perhaps more convenient for me, but if this is a better time slot for everyone else, I really wouldn't mind starting then. Regarding the date, 7th of February would be great.

So what is the format exactly, and what should I prepare beforehand? Thanks.

sanjayankur31 added the S: WIP and S: Scheduled labels and removed the S: Needs scheduling label on Jan 24, 2023
@sanjayankur31
Collaborator Author

Sure, 1600 should be fine too. That's 0800 on the US west coast, which is doable. Great, let's do 7th of Feb then. I'll go set up a zoom meeting.

It's usually an hour, where you have 45 minutes to present: you can either do a presentation, or show a demo, or a combination of both and so on. It's really quite informal. Then the last 15 minutes are for discussion and questions.

Could you please write a short abstract that we can use for the announcements on the website/mailing list/social media? I can even use the one from the paper, if that works for you?

@mpgl

mpgl commented Jan 24, 2023

@sanjayankur31 if you think that 0800 is perhaps too early for a lot of people, we can do it at 1600 UTC. Whatever you prefer.

You may also use this abstract:
Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models can be suboptimal for practical applications due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian 2) that facilitates the development of bioinspired SNNs. Dendrify allows the creation of reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.

@sanjayankur31
Collaborator Author

Let's do 1600 UTC on 7th Feb. I'll set up a zoom meeting and put up a post on the website etc. ASAP.

sanjayankur31 added a commit that referenced this issue Jan 27, 2023
@sanjayankur31
Collaborator Author

PR with post, zoom link, and calendar invite is up: #124

@sanjayankur31
Collaborator Author

Sent announcements to the various mailing lists. You should get them in a few hours, after they've gone through moderation. I'll send a reminder to the wg's mailing list next week too (but not to the other lists). Please do forward the announcements to your colleagues.

@sanjayankur31
Collaborator Author

@OCNS/software-wg : I've started the zoom meeting now. Please join at your convenience:

https://ucl.zoom.us/j/91907842769?pwd=bnEzTU9Eem9SRmthSjJIRElFZ0xwUT09

@sanjayankur31
Collaborator Author

Recordings: https://ucl.zoom.us/rec/share/beUgDj18aaVYZbn3sdEvhwbEnhpsFUvpGkXR5UIudgVVVqBQyS9BDpSoowXDR_FZ.kpxopzfuDoXO27Kj

(It looks like one needs a zoom account to view recordings---I can't seem to turn this setting off).

sanjayankur31 self-assigned this on Feb 7, 2023
sanjayankur31 added the S: Recordings pending and S: Recordings to be announced labels and removed the S: Needs comment, S: Needs to be announced, S: Needs location, S: Needs web post, and S: help wanted labels on Feb 7, 2023
@mstimberg
Collaborator

(It looks like one needs a zoom account to view recordings---I can't seem to turn this setting off).

I had someone contact me during the talk who failed to join for the same reason, i.e. the zoom meeting was only accessible for authenticated users (and they did not want to sign up for a zoom account for some reason). I wonder whether there's a better option for future meetings? Regarding the recording, the idea is to also get it into the INCF training space where it would be accessible without constraints, right?

@sanjayankur31
Collaborator Author

Yes, I've passed it on to the INCF folks, so it'll be on their YouTube channels when uploaded. I don't know if one can use YouTube without an account.

It's a hard one---every platform will require some form of registration for recording, or require us to set up our own recording. If it's not zoom, it'll be google meet or jitsi or bigbluebutton. If folks don't want to set up accounts, that's fine, but we are limited to the services that are available to us. There's not a lot else we can do (there are probably folks who don't want to sign up for GitHub and so don't participate in lots of development too, but then we can't all host our own git forges etc. either)

If someone wants to look into alternatives, totally happy to switch as long as it doesn't require me to do any more additional work for the recording :)
