
[WIP] Octree level of detail #122

Draft: wants to merge 6 commits into main
Conversation

agurvich (Collaborator)

The octree would look a lot better if, instead of showing centers of mass, we showed subsets of the particle data.

closes #116, closes #117

agurvich added 6 commits June 6, 2022 13:51

  - have flag 'use_lod' to recover CoM behavior
  - the binary writer was overwriting the octree json.
  - also shuffle so that the lod isn't in chunks
  - after adding it in (octree_use_lod and octree_field_names) we have liftoff!
  - we've reached a crossroads and i need to decide if i totally rewrite the octree implementation on the javascript side or try and forget the cursed knowledge i have
agurvich added the "work in progress (not yet ready to be merged)" label Jun 14, 2022
agurvich marked this pull request as draft June 14, 2022 15:19
@agurvich (Collaborator, Author)

OK, so I did this with one level, which was the lowest-hanging fruit. The idea is that rather than having the top-level particle mesh contain centers of mass, it contains a decimated version of the entire dataset that is shown all the time.
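
For concreteness, the decimation amounts to something like the sketch below; it assumes the flattened arrays were shuffled up front (as in the commits) so that a contiguous slice is a fair random subsample, and the function name is just illustrative.

```js
// Keep the first nKeep particles of a pre-shuffled, flattened coordinate array
// (x0, y0, z0, x1, y1, z1, ...) as the always-visible top-level subset.
function decimateCoordinates(coordsFlat, nKeep) {
    // slice() copies, so the full-resolution array stays intact for deeper nodes
    return coordsFlat.slice(0, 3 * nKeep);
}
```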

Unfortunately, these particles are too small to see from a distance, and you can't differentially make the particles bigger/smaller for individual nodes without associating them with individual nodes (a tautological statement). But if we're associating them with individual nodes, then we will need a framework for nodes having multiple layers of particles.

There are two ways to do that (a sketch contrasting them follows below):

  1. The kludgey, non-extendable way: store in each node an array of particle indices (w.r.t. the decimated dataset) that are visited when the node is opened/closed, making those (and only those) particles' radiusScale values larger/smaller. This would probably be slow since you'd have to loop through the index array and change each corresponding radiusScale value individually (vs. changing one number).
  2. A separate mesh for the particles associated with each node: then you could just update the PsizeMult for that mesh and scale all the necessary particles at once.

Option 2 would be nice because then you're really just keeping track of additional meshes for each node. In principle, each node could have a list of meshes that we append to as we zoom in closer and closer (rather than having just a single mesh). The only problem is that the number of meshes in the scene could balloon, which could introduce performance issues (especially in the render loop, where we have to loop over every mesh to update things).
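
To make the trade-off concrete, here's a rough three.js-flavored sketch of the two options. It assumes radiusScale is a per-particle buffer attribute and PsizeMult is a per-mesh shader uniform; the function names are illustrative, not the actual Firefly API.

```js
// Option 1: per-particle update -- loop over the node's index array and touch
// each radiusScale entry individually, then re-upload the whole attribute.
function scaleNodeParticles(mesh, nodeIndices, factor) {
    const radiusScale = mesh.geometry.attributes.radiusScale;
    for (const i of nodeIndices) {
        radiusScale.array[i] *= factor;
    }
    radiusScale.needsUpdate = true; // triggers a GPU re-upload of the attribute
}

// Option 2: one mesh per node -- opening/closing the node is a single uniform
// change, with no per-particle loop at all.
function scaleNodeMesh(nodeMesh, factor) {
    nodeMesh.material.uniforms.PsizeMult.value *= factor;
}
```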

@agurvich (Collaborator, Author)

The easiest way to handle this would be for each node to have its own .ffly-like file that we could read slices of. But because of the way .ffly files are structured (Coordinates_flat, Velocities_flat, scalar_field1, scalar_field2, ...), this would be very difficult: we'd need to read with a stride. Instead, having individual files of just the flattened coordinates, the flattened velocities, and then each scalar field would make it very easy to read subsets of the particle data. Each node would then have its own directory containing ~a dozen files (depending on how many scalar fields you have), as sketched below.
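
A minimal sketch of how simple the reads would become, assuming each node directory holds one raw little-endian float32 file per field (the directory layout, file names, and extension are made up, not the current .ffly spec):

```js
// Hypothetical per-node layout:
//   octree/node_042/Coordinates_flat.f32
//   octree/node_042/Velocities_flat.f32
//   octree/node_042/scalar_field1.f32
//   ...one file per scalar field...
// Reading any one field is then a single contiguous fetch -- no stride needed.
async function readNodeField(nodeDir, fieldName) {
    const response = await fetch(`${nodeDir}/${fieldName}.f32`);
    const buffer = await response.arrayBuffer();
    return new Float32Array(buffer); // assumes raw little-endian float32
}
```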

This sounds like chaos, but it would simultaneously make it really easy to append to an octree, e.g. for octree streaming for Gaia DR3 and other large datasets. I'm hesitant to do this before submitting the draft, though, because it would mean ripping out the entire octree framework I spent the better part of 3 weeks building and messing with it again.

@agurvich (Collaborator, Author) commented Jul 5, 2022

In my spare shower-thought time, I've realized that if I just write my own binary reader without Kaitai, then I can read substrings of .ffly files using clever byte offsets to get each of the field arrays I want. I'll probably want to start from scratch with a new branch/PR, though.
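
Roughly what I have in mind, as a sketch: it assumes we know each field's byte offset and length up front (e.g. from a small header or sidecar JSON, which the current format doesn't expose yet), and that the server honors HTTP Range requests.

```js
// Fetch only the bytes for one field out of a monolithic .ffly file.
// `offset` and `length` are in bytes and must come from header metadata.
async function readFieldSlice(url, offset, length) {
    const response = await fetch(url, {
        headers: { Range: `bytes=${offset}-${offset + length - 1}` },
    });
    const buffer = await response.arrayBuffer();
    return new Float32Array(buffer); // assumes raw little-endian float32
}
```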

Labels
  work in progress (not yet ready to be merged)
Successfully merging this pull request may close these issues:

  - add level of detail mode to octree
  - replace octree CoM with decimated version of the dataset