Time-dependent problem at de Gerlache crater #55
I'm running these tests on my side, making meshes at different resolutions, etc. Here is a first plot at 500 m/px of max values over 1 month. I'm using https://github.com/steo85it/raster2mesh to generate the meshes; I can check if that can be easily called from your example. |
Gotcha. I like the look of these plots, they are interesting. How big are these meshes? And where are the scripts you're currently using? It will be helpful if you can share the code you use to call SPICE to generate sun positions. |
I run it with this script from the "applications repo" I shared some time ago. |
About the code for SPICE, these lines should be useful. These could be useful, too. (Btw, I made a few adjustments to paths, etc... today. I'll push them if useful, but it doesn't change much.) Also, if you have trouble understanding what I'm doing in that mess, feel free to ask (even though it would maybe be easier for me to unpack it). |
For a start, I'd replace the content of |
See updates in gerlache branch in github/python-flux for the steps you mentioned (generate mesh - resolution is argument - and convert to cartesian). |
I see the imperfections. Since there are speckles in the first plot, I guess it has to do with using Embree? So that I understand: to make your mesh, you load Mike's DEM as a point cloud and then use a tool to convert a point cloud into a 3D mesh? I was planning on using meshpy with a user refinement function which checks the area of each 3D triangle. This way it should be possible to use one threshold inside an ROI containing the crater, and another outside for coarse-grained occlusion. I have to run some errands until early afternoon today (going to get ultrasound #2 of our baby :o) but will be back later. |
Almost! :) I load a downsampled (to 50mpp, with
I got a bit annoyed with meshpy, dmesh, etc., since they are really slow at high resolution. Open3D is very fast: 73 seconds for 6924777 faces of a 60 mpp mesh of the lsp, 9 seconds for 394033 faces of the 250 mpp mesh (I haven't checked in detail how good/regular/water-tight the mesh is, but the result usually shows no artifacts or imperfections).
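For what it's worth, a minimal sketch of the Open3D route (I'm assuming Poisson surface reconstruction here, which may not be exactly what raster2mesh does, and the file name is a placeholder):

```python
import numpy as np
import open3d as o3d

# Hypothetical input: an (N, 3) array of DEM points in cartesian coordinates.
pts = np.load("lsp_points.npy")  # placeholder file name

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
pcd.estimate_normals()  # Poisson reconstruction needs oriented normals

# Depth controls the octree resolution (and hence the face count).
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)

V = np.asarray(mesh.vertices)
F = np.asarray(mesh.triangles)
```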
That's super cool, enjoy and congrats! (great that they let you in, too, in these covid times) :) |
In any case, I would rather import the GTiff/raster into an xarray, e.g.,
instead of several simple .npy files, to preserve useful information such as reference frames, which one later needs for coordinate transformations. |
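A sketch of one way to do this load, assuming the rioxarray wrapper (file name hypothetical):

```python
import rioxarray

# Load the GeoTIFF as an xarray DataArray; the CRS and the x/y coordinate
# arrays come along for free, unlike with plain .npy dumps.
dem = rioxarray.open_rasterio("ldem_87s_5mpp.tif")  # hypothetical file name

print(dem.rio.crs)            # reference frame / projection metadata
z = dem.sel(band=1).values    # 2D elevation grid
x, y = dem.x.values, dem.y.values
```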
To avoid any interpolation, I simply do this to generate a mesh:
where I might create (x, y) from a raster. It's surprisingly fast (I've used it for millions of points), avoids loss of accuracy due to interpolation, and works for all quad-tree situations. We want to triangulate an existing point cloud, not create any other type of mesh. |
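A sketch of that idea using a plain 2D Delaunay triangulation (not necessarily the original snippet; the raster variable names are placeholders):

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical inputs taken directly from the raster, with no resampling:
# xgrid, ygrid are 1D pixel coordinate arrays; zgrid is the 2D elevation grid.
X, Y = np.meshgrid(xgrid, ygrid)
x, y, z = X.ravel(), Y.ravel(), zgrid.ravel()

# Triangulate in the horizontal plane only; z is carried along untouched,
# so the original DEM values are never interpolated.
tri = Delaunay(np.column_stack([x, y]))
V = np.column_stack([x, y, z])  # vertices keep the exact DEM heights
F = tri.simplices               # (M, 3) array of triangle vertex indices
```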
Yes, that was the other option: using
or
and then doing what @nschorgh suggests. For my applications, I still think my approach is more flexible, but maybe here we can cut out part of Mike's GTiff and process it this way w/o importing many libraries. |
I am not convinced by either of your approaches to meshing. Sorry... :-( Please see the script make_mesh_sfp.py in the gerlache branch. It creates a 40K-triangle mesh of the region [-25, 25] x [-25, 25] (stereo x/y) in 42 seconds on my computer, using Shewchuk's Triangle via MeshPy with a refinement function. This seems like a reasonable amount of time to wait for a triangle mesh, since we will spend much more time on the backend actually solving the problem. Maybe it would make sense to compare the mesh construction time with the compressed FF matrix assembly time. Some details:
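For concreteness, a minimal sketch of the refinement-function mechanism (the ROI radius and area targets below are placeholders; make_mesh_sfp.py is the authoritative version):

```python
import numpy as np
from meshpy import triangle

# Domain: the square [-25, 25] x [-25, 25] in stereo x/y.
info = triangle.MeshInfo()
info.set_points([(-25, -25), (25, -25), (25, 25), (-25, 25)])
info.set_facets([(0, 1), (1, 2), (2, 3), (3, 0)])

def needs_refinement(vertices, area):
    # Centroid of the candidate triangle in stereo (x, y).
    cx = sum(v.x for v in vertices) / 3
    cy = sum(v.y for v in vertices) / 3
    r = (cx**2 + cy**2) ** 0.5
    # Constant target area inside an ROI around the crater, linear ramp outside.
    max_area = 0.4 if r < 15 else 0.4 + 0.2 * (r - 15)
    return bool(area > max_area)

mesh = triangle.build(info, refinement_func=needs_refinement)
V2 = np.array(mesh.points)    # 2D vertices; z is looked up from the DEM afterwards
F = np.array(mesh.elements)   # triangle vertex indices
```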
Here is a screenshot of the resulting mesh:
Thanks. We are very excited. :-) |
Not a problem, and whatever you like to use is clearly fine (and usually a better idea). :) Still, for what matters, I think steps 1 and 2 should be improved. |
No problem! Of course, if your methods end up being better, we should use those instead! I am not familiar with xarray, rasterio, or Open3D. It takes me a while to pick up new libraries, so I like to stick with what I know if I can. Sorry, some inertia on my part... I'd like to understand your objection to using interpolation a bit better. I made some plots using a script I just pushed: The first plot is just the DEM for a small 100x100 patch. The next two plots show the first differences along each axis in centimeters. The final plot shows the histograms of the first and second differences, with the DEM's error bars (as I understand them) included. From the DEM's website, "this 87-90 deg mosaic product has a typical RMS residual of 30-55 cm with the individual fields". When I look at these plots, my instinct is: "Yes, it's 100% fine to use linear interpolation to access sub-pixel values at the original grid resolution. You will incur some error, but it is very unlikely to exceed the existing DEM error, given the regularity of the field being sampled. Using anything higher order than linear interpolation is a waste, because DEM error dominates and the surface itself is not meaningfully smooth (though it is continuous)." On the other hand, if you first downsample the grid (say, by a factor of 10) and then linearly interpolate to access sub-pixel values, you will certainly incur an error much larger than the stated 30-55 cm. Of course, this will also happen at non-vertex points if you generate a very coarse mesh. But by linearly interpolating at the original 5 mpp resolution, you can at least have some confidence in the vertices, which means that if you refine, you can expect the generated triangle mesh to conform to the DEM if you go fine enough. If you linearly interpolate from a downsampled grid, this will not happen. If you cubically interpolate from a downsampled grid, this also is unlikely to happen, because the DEM itself is not smooth, and hence higher-order polynomial interpolation is a bit off the mark. So in my mind it makes sense to just access the (very high quality!) DEM directly and use linear interpolation to look up sub-pixel values. This is cheap, easy, and straightforward. |
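In code, the lookup I have in mind is just linear interpolation on the original grid, e.g. with scipy (a sketch; x, y, z here are the DEM's native 5 mpp coordinate arrays and elevation grid):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# x, y: 1D coordinate arrays of the original 5 mpp grid; z: 2D elevation grid.
getz = RegularGridInterpolator((x, y), z, method="linear")

# Sub-pixel lookup at arbitrary mesh vertices (xq, yq). The interpolation
# error should stay below the DEM's stated 30-55 cm RMS residual.
zq = getz(np.column_stack([xq, yq]))
```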
Here are a couple more screenshots showing what I'm after: This uses a refinement function with a constant target triangle area "inside" Gerlache and a linear ramp outside of the crater. This is a mesh with 10k faces (20 s to make). There's probably a smarter choice of refinement function, but this gets the idea across. This is all one triangle mesh; only a very small number of triangles are used to extend the mesh for occlusion. Another one with 22k faces took 59 s to make. |
Wow, this looks great, thanks. And if there aren't irregularities at the boundary between the "inner" and "outer" region, it should work well, too (I always had trouble and artifacts appearing because of imperfections at the boundaries, when I tried a similar approach... but probably it was some mistake on my side)! Also, indeed your plots seem to show that there is no problem saving an interpolated version of the data. |
@sampotter Lots of labor often goes into the construction of DEMs. Construction of this particular DEM was a paper by itself. Some spend many hours constructing DEMs from shadows and altimeter footprints. For one project I'm involved in, someone will spend 1400 hours to improve the DEMs of a handful of craters on Ceres. So, DEM points can contain precious information. For demonstrating the computational power of a method this doesn't matter, but if one wants accurate shadows and temperatures, giving up original grid points would be an unwanted and unnecessary loss of information. |
I agree with that. My point is that in terms of dulling a ridge, linearly interpolating the DEM at the original resolution (5 mpp) is much less likely to dull it than linearly interpolating a downsampled DEM (50 mpp). I also don't think there is any reason to use higher-order interpolation at the 5 mpp resolution, because of noise. If necessary, a list of vertices to fix in the triangulation can be inserted using MeshPy, I think. Probably not worth including for the current paper. |
OK, I put together the start of a simple "spoofed" time-dependent example at Gerlache. Here's a movie of the steady state temperature for some fake sun positions: (attachment: movie.mp4) This is generated using try_time_dependent_problem.py in the gerlache branch. Right now I'm using fake sun positions and computing the steady state temperature for the compressed form factor matrix. To-do list:
For the last point, we will need to come up with a list of ways we can compare. One simple thing to check would be the RMS error vs time between the true FF matrix and the compressed FF matrix for different z values in the subsurface heat conduction. This will let us check how much drift over time is introduced by using the compressed FF matrix. |
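A sketch of that check, assuming we store the temperature fields as arrays of shape (n_times, n_layers, n_faces) (the variable names are hypothetical):

```python
import numpy as np

def rms_error_vs_time(T_true, T_comp):
    # T_true: temperatures from the dense FF matrix, shape (n_t, n_z, n_faces).
    # T_comp: same quantity from the compressed FF matrix.
    # RMS over the facets, separately for each time step and each depth.
    return np.sqrt(np.mean((T_comp - T_true) ** 2, axis=2))  # shape (n_t, n_z)

err = rms_error_vs_time(T_true, T_comp)
# err[:, k] shows how much drift the compression introduces at depth index k.
```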
Is it normal to get this issue when running
|
The start of the Gerlache crater example now uses
I just picked any old physical parameters I could steal from other scripts to get this to work (mostly from examples-old/lsp_illuminate.py). I would like to massage this script into something that will generate the data we need for the paper. I am going to ignore whether this model makes any sense for now and move on to finishing the setup of the example:
At the same time we should:
Here are some videos from layers 0 through 3. Skin depths are 0.1 mm apart. Color range is from 99 K to 128 K. (attachments: layer0.mp4, layer1.mp4, layer2.mp4, layer3.mp4) |
I added a new function |
I would do the error/convergence test for de Gerlache with the equilibrium temperature - that's much faster and to the point. |
Are those at equilibrium? Probably not, and do we want to comment on getting there, in the paper? |
No, definitely not at equilibrium. Getting to equilibrium is outside the scope of the paper.
I'll make a note to do these, too. Since the claim is that we can use the compressed FF matrix to solve the time-dependent problem more quickly, I think it would be useful to try to get a rough idea of what the error incurred by doing so is. I think it should be quite small. |
With equilibrium I simply mean epsilon * sigma * T^4 = B (the surface temperature is in equilibrium with the incoming radiance, no subsurface heat flux). Convergence with spatial resolution. |
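In code form that's a one-liner (a sketch; B is the per-facet incoming radiance array and the emissivity value is a placeholder):

```python
import numpy as np
from scipy.constants import sigma  # Stefan-Boltzmann constant [W m^-2 K^-4]

emiss = 0.95          # placeholder emissivity
B = np.load("B.npy")  # hypothetical per-facet incoming radiance [W m^-2]

# Equilibrium with the incoming radiance, no subsurface heat flux:
# emiss * sigma * T**4 = B
T_eq = (B / (emiss * sigma)) ** 0.25
```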
Yes of course --- we are talking about two different things. You were suggesting plots made w/o subsurface flux and Stefano was just asking whether the simulation had run long enough for the temperature to reach equilibrium in the videos I attached. |
a2504bc should address using real sun positions from SPICE when this option is True (it's not yet thoroughly tested, and feel free to change things you don't like, but the results look reasonable and if vertices are given in cartesian coordinates, it should be fine) |
Here's a quick error test without subsurface heat conduction... Need to figure out what the most sensible way to do this is. To make these plots, I sampled the temperature values on a regular grid by setting up an orthographic camera pointing at the crater, and traced a ray for each pixel. I then look up the temperature value based on which triangle was hit. I am not sure this is the best way to do this, but this is the first idea I had. Test 1 Here is The same thing but with In both cases, the max % error is set to 10 on the colorbar, so actually there is quite an improvement here when using a lower tolerance. Test 2 The same as the first plot from Test 1, except that for the compressed FF matrix we instead set the inner area to 0.75, so that a different mesh is used (again, % error clamped to 10%): You can see wherever there is a ridge, there is quite a lot of error in the solution. This is understandable. On the ridge, the orientation of the triangles will change significantly, which alters the solution there dramatically. Meshes for two different finenesses only roughly agree in space. Comparing solutions on two different meshes isn't straightforward, I don't think. Even if you use a mesh taken from a regular grid with different grid spacing, you will have the same problem. To address this, we should probably use curved elements and higher-order basis functions (e.g., linear at least) instead of flat triangles + constant basis functions. This is a substantive amount of work which could be addressed in another paper, I think. In such a paper we would also want to decide what the correct way to "conservatively interpolate" solutions between meshes is. I think it makes sense to include Test 1 but not Test 2. That is, we should check the agreement between the true FF matrix and compressed FF matrix for different tolerances, but doing a convergence study at this point may be premature. What do you guys think? |
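For reference, roughly what my sampling step looks like (a sketch using trimesh's ray casting as a stand-in for what the script actually does; the image bounds and resolution are placeholders):

```python
import numpy as np
import trimesh

tm = trimesh.Trimesh(vertices=V, faces=F)  # V, F: the crater mesh

# Orthographic "camera": a regular grid of rays starting below the surface,
# all pointing straight up in +z.
n = 512
xg = np.linspace(x0, x1, n)  # x0, x1, y0, y1: hypothetical image bounds
yg = np.linspace(y0, y1, n)
X, Y = np.meshgrid(xg, yg)
origins = np.column_stack(
    [X.ravel(), Y.ravel(), np.full(n * n, V[:, 2].min() - 1.0)])
directions = np.tile([0.0, 0.0, 1.0], (n * n, 1))

# Index of the first triangle each ray hits (-1 on a miss), then look up
# the per-facet temperature T to form the image.
hit = tm.ray.intersects_first(origins, directions)
img = np.where(hit >= 0, T[hit], np.nan).reshape(n, n)
```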
There are two separate orthogonal questions here:
The answer to (2) is "yes" based on numerical evidence we are able to provide. The answer to (1) is outside the scope of the paper, IMO. Improving on (1) is a separate line of research. In general, if the orders of accuracy of the ingredients in (1) are improved, our approach is still relevant and useful and should provide an affirmative answer to (2). IMO, in this way, the focus of the current paper should be as a proof of concept demonstrating that the answer to (2) is "yes". |
This is perfect, thanks! |
I would agree that our goal is to assess the performance of the FF compression (1), rather than of different kinds/choices of meshing (that would be 2, I think, and I agree that's not in the scope of this paper). |
Yes, we can do this, and we will do this. We could make a plot of this value for varying compression tolerance and mesh size (there are some plots in the Overleaf doc already along these lines); or maybe better, just a table of values, since the paper is going to become overburdened with plots. The plots above in the case where the meshes are the same size and T is directly comparable are just for visualization purposes (unlike the third plot, where I try to compare directly the fields obtained from different meshes). I think it's helpful to get some visual sense of the error; where it is largest, whether there is some spatial correlation, etc. Does that answer your question? |
Mmm, not really ... what I meant is that we have T(FF,x,y,z,t) and T(cFF,x,y,z,t) and we can compare and plot those, with associated coordinates to get the same result that you plotted above (I think). What you did is ok, btw, but since you were wondering if there were "simpler" alternatives... |
How would you go about making that plot? |
Erase this, I've been messing up examples... different config, sorry... let me think a bit more about this. We are starting from a mesh here, not obvious how to consistently get back to a raster (e.g., https://github.com/jeremybutlermaptek/mesh_to_geotiff). Ok, got the issue, pardon. |
Also, are your plots in stereo projection, or what are the axes coordinates? It would be good to have those in stereo, for the paper (either by keeping the mesh itself in stereo, and just converting to cartesian for the illumination computations - or else by reprojecting before plotting). |
This has a clear answer. In our discretization of the radiosity integral equation, we use constant basis functions. So, we think of
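For reference, the discretized system I have in mind (a sketch of the standard radiosity setup; the paper's notation may differ):

$$B = E + \rho F B \quad\Longleftrightarrow\quad (I - \rho F)\,B = E$$

where each facet $i$ carries a single value $B_i$ (constant basis functions), $E$ is the direct insolation, $F$ is the form factor matrix, and the temperature then follows facet-wise from $\epsilon \sigma T_i^4 = B_i$, as above.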
The axes currently are (x, y) Cartesian. I set up a grid of rays beneath the crater with ray directions all equal to (0, 0, 1). The origin of each ray is of the form Let me think about how best to do a plot like this in stereo projection. |
OK, plotting in stereo was actually pretty easy. Simple way to do this:
For this plot I just made a |
Yes, that's the "inverse" of what I was suggesting or usually doing (I usually keep V in stereo and convert to cartesian just for the illumination computation, but it works all the same): one just associates different vertex coordinates with the same indices (and hence facets). |
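A sketch of the conversion being discussed, assuming a spherical Moon and a south-polar stereographic plane tangent at the pole, projected from the north pole (the actual scripts may use a different convention):

```python
import numpy as np

R = 1737.4e3  # assumed spherical lunar radius [m]

def stereo_to_cartesian(X, Y, h):
    # Inverse south-polar stereographic projection, then a radial lift by
    # the elevation h; the same vertex indices (and facets) are reused.
    rho2 = X**2 + Y**2
    t = 4 * R**2 / (rho2 + 4 * R**2)
    x, y = t * X, t * Y
    z = R * (rho2 - 4 * R**2) / (rho2 + 4 * R**2)
    # (x, y, z) lies on the sphere of radius R; scale out to radius R + h.
    s = (R + h) / R
    return s * x, s * y, s * z
```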
You can think of associating T to the center of each triangle if you like. I find it helpful to think of T as a piecewise constant function defined on the facets of a piecewise linear surface in 3D. The two can be converted back and forth in the obvious way.
There are a few things. I find the GeoTIFFs hard to work with because I haven't learned the format. They slow me down a lot. I am also not really sure what they bring to the table for the purposes of the current paper. If we want to think of T as a list of point values defined at the centroid of each triangle, then we can just work with these values directly (i.e. with the numpy array that is the result of e.g. compute_steady_state_temp). Indeed, if we always use the same mesh, no interpolation is necessary. And the thing I'm doing to make the plots above isn't necessary, either. What I'm doing above does let us compare between different meshes. I am not at all sure this is a sensible thing to do. I think we should probably just leave it alone for now. That said, I think it is helpful to make image plots like the above to give a qualitative sense of the solution. I like this approach because it computes a separate value for each pixel in an image. If we treat T as a bunch of point values, it isn't clear to me how we make such a plot. One way to go would be to make a scatter plot, but I think this is a bit confusing and unpleasant to look at. The pictures I made above are clear: they are just the result of taking a picture of T (in its "piecewise constant function on triangle mesh in 3D" mode). Please let me know if I'm missing something in your previous response. |
Can you explain the plot a bit and how it differs from the plot I'm making? I need some more context. Why is there "shot noise" (white pixels) throughout? |
Sure, sorry. It's quite close to what you are doing (which is good) but based on those T(t,z,x,y) arrays I was mentioning, which might make it easier to compare and compute differences between different outputs and configurations. Those arrays can be exported to GTiffs (if useful for storing, avoiding pickle issues), but also simply "used" as arrays within the script.
Those are PSRs which I intentionally removed from the whole computation. They should be like this, not an artifact. :) |
OK. How do you make a raster for that plot? I'm still confused about that part. |
ok, let me try with an output of gerlache and see if/how it works, and we can better compare and discuss. :) |
Nope, more difficult than I expected: it would need regridding and such. A pity; it would have been convenient for I/O and computations. |
OK, no problem. We have a way of making the necessary plots for now which is enough for the current paper. Definitely the GeoTIFF format seems useful for the future. |
@sampotter I think there is an issue in the "stereo vertices" of your mesh. For example, when you compare the stereographic coordinates of
Once the transformation above is applied, the illumination/temperature maps look fine (below, an overlap of Tmax in K with the expected PSRs in yellow - the Tmax < 160 K areas are well aligned with the yellow areas), which also seems to exclude a major problem with the cartesian coordinates (else illumination would be off, etc...) |
I would say it is quite possible I did something silly when reading the data in from Mike's GeoTIFF file. I am not sure exactly where the problem is, but I do recall having to screw around with the axes in the image plots to make them match. |
IIRC, Mike's GeoTIFF file contains (buried within it somewhere), a packed 1D array of 64-bit floats corresponding to x coordinates, ditto for y coordinates, and then a packed 2D array of z values. (Maybe the x and y floats are also 2D arrays, but pretty sure it was 1D...) I am not sure how the 2D array of z values is ordered. Should be either row major or column major. If we can figure that out, I should easily be able to make my getz function consistent with it. Hopefully that will fix the bug. |
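One way to check the ordering directly with rasterio (the file name is hypothetical):

```python
import rasterio

with rasterio.open("ldem_87s_5mpp.tif") as ds:  # hypothetical file name
    z = ds.read(1)  # 2D elevation array, indexed z[row, col], row 0 at the top
    # Map coordinates of the upper-left and lower-right pixel centers:
    x_ul, y_ul = ds.xy(0, 0)
    x_lr, y_lr = ds.xy(ds.height - 1, ds.width - 1)
    print(ds.transform)  # affine transform mapping (col, row) -> (x, y)
```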
(I just noticed that xc, yc are not only switched w.r.t. the usual stereo coordinates, but rather |
By using
you should be able to retrieve those coordinates. |
@steo85it
What has been done with this so far? I would like to push this along.