Time-dependent problem at de Gerlache crater #55

Open
sampotter opened this issue Jan 18, 2022 · 56 comments

@sampotter
Owner

@steo85it

What has been done with this so far? I would like to push this along

@sampotter
Owner Author

Added start of example to ./examples/gerlache in a156047.

This will download Mike's DEM from PGDA (here) and convert it to a .npy file which can be easily memory-mapped.
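
For reference, a memory-mapped load of the converted file is a one-liner (the filename below is a placeholder for whatever the conversion script writes out):

    import numpy as np

    # Memory-map the converted DEM so the full array never has to be read into RAM.
    dem = np.load('ldem_87s_5mpp.npy', mmap_mode='r')
    print(dem.shape, dem.dtype)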

Next to do: make a Delaunay mesh of de Gerlache.

@steo85it
Collaborator

steo85it commented Jan 19, 2022

I'm running these tests on my side, making meshes at different resolutions, etc. Here is a first plot at 500 m/px of the max values over 1 month (it seems that eps is not optimal for this resolution, and I had colorbars optimized for Mercury ^^). Otherwise it seems to work (also including occultations from the outer topography).
image

I'm using https://github.com/steo85it/raster2mesh to generate the meshes; I can check whether it can be easily called from your example.
Also, it's great that you set up a more structured example/test: we should run the example at different resolutions and compare the results (which ones? fluxes? T?) to some ground truth (the non-compressed FF). And I should update raster2mesh to take (xmin, xmax, ymin, ymax) as input instead of selecting them manually on a plot (that's useful, but not great for repeating tests consistently).

@sampotter
Owner Author

Gotcha. I like the look of these plots; they are interesting. How big are these meshes? And where are the scripts you're currently using? It would be helpful if you could share the code you use to call SPICE to generate sun positions.

@steo85it
Collaborator

steo85it commented Jan 19, 2022

I run it with this script from the "applications repo" I shared some time ago.
I have to figure out how to unpack it (especially the large proc_objfil.py which it calls) and include it in a structure like the one you set up in ./examples/gerlache (which is quite useful to have as a basis for doing that).

@steo85it
Collaborator

steo85it commented Jan 19, 2022

About the code for SPICE, these lines should be useful. These could be useful, too. (Btw, I made a few adjustments to paths, etc... today. I'll push them if useful, but it doesn't change much.) Also, if you have trouble understanding what I'm doing in that mess, feel free to ask (even though it would maybe be easier for me to unpack it).

@steo85it
Collaborator

steo85it commented Jan 19, 2022

How big are these meshes?

This one was a small test: barely 2500 triangles at a resolution of 500 mpp. Here is another one at 250 mpp (87000 triangles), but I should start saving some statistics (also, there are some imperfections; I'm not sure whether they're related to eps, Embree vs. CGAL, or something else). Let me check whether it's easy to integrate things into your cleaner structure.
image

Also, I noticed that I run into trouble loading point clouds from Mike's 5 mpp DEM (even before meshing my selection, at least on my laptop)... it's probably best to use a version downsampled to 20 or 50 mpp (anyway, we won't be able to use meshes down to 5 mpp for a whole crater). I'll make one with gdal and load it.

@steo85it
Collaborator

steo85it commented Jan 19, 2022

For a start, I'd replace the content of convert_geotiff_to_npy.py and make_mesh.py with these lines (in one or two files, whichever is clearer).
I'll do it after dinner, if that's ok for you (we can simplify by skipping the outer/inner mesh, if we don't plan to focus on occultations from far-away topography - not sure that's relevant for the paper - and also take the coordinates to cut as input parameters). Then I'll be happy to have some help cleaning it up a bit. ;)

@steo85it
Collaborator

steo85it commented Jan 19, 2022

See the updates in the gerlache branch of github/python-flux for the steps you mentioned (generate the mesh - resolution is an argument - and convert to Cartesian).
Next steps (besides cleaning): generate the FF and illuminate using tools from the other examples.

@sampotter
Owner Author

I see the imperfections. Since there are speckles in the first plot, I guess it has to do with using Embree?

So that I understand: to make your mesh, you load Mike's DEM as a point cloud and then use a tool to convert a point cloud into a 3D mesh?

I was planning on using meshpy with a user refinement function which checks the area of each 3D triangle. This way it should be possible to use one threshold inside an ROI containing the crater, and another outside for coarse-grained occlusion.

I have to run some errands until early afternoon today (going to get ultrasound #2 of our baby :o) but will be back later.

@steo85it
Collaborator

steo85it commented Jan 19, 2022

So that I understand: to make your mesh, you load Mike's DEM as a point cloud and then use a tool to convert a point cloud into a 3D mesh?

Almost! :) I load a downsampled (to 50mpp, with gdal_translate ldem_87s_5mpp.tif -tr 50 50 ldem_87s_50mpp.tif) version of Mike's DEM, cut out the part that I need (interactively or not), indeed convert it to a point-cloud, then use Open3d (with Poisson's method) to get the mesh at whatever resolution (>=50mpp, in this case).
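
A minimal sketch of that Open3D step, with random points standing in for the cut-out DEM and an assumed depth parameter (not necessarily the settings raster2mesh uses):

    import numpy as np
    import open3d as o3d

    # Placeholder point cloud standing in for the cut-out, downsampled DEM (x, y, z).
    rng = np.random.default_rng(0)
    xyz = np.column_stack([rng.uniform(-25e3, 25e3, (10000, 2)), rng.random(10000)])

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    pcd.estimate_normals()  # Poisson reconstruction needs oriented normals

    # Poisson surface reconstruction; depth controls the effective resolution (assumed value).
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    V, F = np.asarray(mesh.vertices), np.asarray(mesh.triangles)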

I was planning on using meshpy with a user refinement function which checks the area of each 3D triangle. This way it should be possible to use one threshold inside an ROI containing the crater, and another outside for coarse-grained occlusion.

I got a bit annoyed with meshpy, dmesh, etc. since they are really slow at high resolution. Open3D is very fast: 73 seconds for the 6,924,777 faces of a 60 mpp mesh of the lsp, 9 seconds for the 394,033 faces of the 250 mpp mesh (I haven't checked in detail how good/regular/watertight the mesh is, but the result usually shows no artifacts or imperfections).
Also, you might run into trouble with what you plan for the outer mesh, because of "holes" at the transition between the outer and inner mesh. I solved it (after a while of trying different solutions, including the one you mentioned) by meshing the whole DEM at the target resolution (as it's fast), then cutting out the inner DEM (it's in the code I linked yesterday; I didn't copy it to the example because I don't think these occlusions are really relevant for the paper).

I have to run some errands until early afternoon today (going to get ultrasound #2 of our baby :o) but will be back later.

That's super cool, enjoy and congrats! (great that they let you in, too, in these covid times) :)

@steo85it
Collaborator

In any case, I would rather import the GTiff/raster to an xarray, e.g.,

import xarray as xr
ds = xr.open_dataset(raster_path, engine="rasterio")  # the "rasterio" engine is provided by rioxarray

instead of several plain .npy files, to preserve useful information such as reference frames, etc., which one later needs for coordinate transformations.

@nschorgh
Collaborator

To avoid any interpolation from occurring I simply do this to generate a mesh:

    import numpy as np
    import scipy.spatial

    points_mesh = a[:,:2]   # mesh based on (x,y); a is the (N, 3) array of (x, y, z) points
    triangulation = scipy.spatial.Delaunay(points_mesh)
    V, F = triangulation.points, triangulation.simplices
    V = np.row_stack([V.T, a[:,2]]).T   # add z to vertex coordinates

where I might create (x, y) from a raster. It's surprisingly fast (I've used it for millions of points), avoids loss of accuracy due to interpolation, and works for all quad-tree situations. We want to triangulate an existing point cloud, not create any other type of mesh.
gdal_translate is presumably good at downsampling a raster.

@steo85it
Collaborator

steo85it commented Jan 19, 2022

Yes, that was the other option: using the GDAL libraries to read and downsample the raster within the code

from osgeo import gdal
ds = gdal.Open('input.tif')
ds = gdal.Translate('output.tif', ds, xRes=50, yRes=50)  # xRes/yRes must be keyword args; 50 m/px as an example

or

import xarray as xr
import rioxarray  # needed for the .rio accessor
from rasterio.enums import Resampling
rds = xr.open_dataset(raster_path, engine="rasterio")
downsampled = rds.rio.reproject(rds.rio.crs, resolution=(500., 500.), resampling=Resampling.cubic_spline)

and then do what @nschorgh suggests. For my applications, I still think my approach is more flexible, but maybe here we can cut part of Mike's GTiff and process it in this way w/o importing many libraries.
I can re-implement it this way if you prefer; I'm doing something similar for another project anyway.

@sampotter
Owner Author

I am not convinced by either of your approaches to meshing. Sorry... :-(

Please see the script make_mesh_sfp.py in the gerlache branch. It creates a 40K-triangle mesh of the region [-25, 25] x [-25, 25] (stereo x/y) in 42 seconds on my computer. This uses Shewchuk's Triangle via MeshPy with a refinement function, should_refine, which enforces a maximum area (in km^2) on each 3D triangle.

This seems like a reasonable amount of time to wait for a triangle mesh since we will spend much more time on the backend actually solving the problem. Maybe it would make sense to compare the amount of time it takes to construct the mesh with the compressed FF matrix assembly time.

Some details:

  1. this loads the original 5 mpp DEM as a memory-mapped file so the whole DEM doesn't need to be loaded into memory
  2. it uses linear interpolation to look up subpixel values from the original DEM
  3. it builds the Delaunay mesh in the stereographic x/y plane
  4. for refinement, it maps each stereographic triangle to Cartesian coordinates and computes the area (see the sketch below)
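
A minimal sketch of that refinement-function mechanism (the flat get_z/lift_to_3d stand-ins below take the place of the DEM lookup and stereo-to-Cartesian mapping in make_mesh_sfp.py, and the area threshold is an assumed value, so this is not the actual should_refine):

    import numpy as np
    import meshpy.triangle as triangle

    # Stand-ins for the DEM lookup and stereo -> Cartesian mapping; a flat patch keeps the sketch runnable.
    def get_z(x, y):
        return 0.0

    def lift_to_3d(x, y):
        return np.array([x, y, get_z(x, y)])

    MAX_AREA = 0.4  # km^2, target 3D triangle area (assumed value)

    def should_refine(tri_points, area):
        # MeshPy passes the 2D vertices and 2D area of the candidate triangle;
        # refine based on the area of the corresponding 3D triangle instead.
        v = np.array([lift_to_3d(x, y) for x, y in np.array(tri_points)])
        area_3d = 0.5*np.linalg.norm(np.cross(v[1] - v[0], v[2] - v[0]))
        return bool(area_3d > MAX_AREA)

    # Mesh the [-25, 25] x [-25, 25] stereo box mentioned above.
    info = triangle.MeshInfo()
    info.set_points([(-25, -25), (25, -25), (25, 25), (-25, 25)])
    info.set_facets([(0, 1), (1, 2), (2, 3), (3, 0)])
    mesh = triangle.build(info, refinement_func=should_refine)
    V, F = np.array(mesh.points), np.array(mesh.elements)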

Here is a screenshot of the resulting mesh:

image

That's super cool, enjoy and congrats! (great that they let you in, too, in these covid times) :)

Thanks. We are very excited. :-)

@steo85it
Collaborator

steo85it commented Jan 20, 2022

Not a problem, and whatever you like to use is clearly fine (and usually a better idea). :) Still, for what it's worth, I think steps 1 and 2 could be improved.
I would definitely use the xarray package to load the map, clip it, resample it (we said that we don't want interpolation, let alone linear interpolation) and eventually reproject it. It's optimized for that and it would make the code cleaner (and it's a really standard package for mapping and such + it has some really cool options for multidimensional dataset analysis and visualization).
Once you have the DEM portion you want in memory (steps 1-2), I'm fine with meshing it up as you are doing.
(I'm merging a quick example to discuss)

@sampotter
Owner Author

sampotter commented Jan 20, 2022

No problem! Of course if your methods end up being better, we should use those instead! I am not familiar with xarray, rasterio, or Open3D. It takes me a while to pick up new libraries, so I like to stick with what I know if I can. Sorry, some inertia on my part...

I'd like to understand your objection to using interpolation a bit better. I made some plots using a script I just pushed:

image
image

The first plot is just the DEM for a small 100x100 patch. The second two plots show the first differences along each axis in centimeters. The third plot shows the histograms of the first and second differences, with the DEM's error bars (as I understand them) included. From the DEM's website, "this 87-90 deg mosaic product has a typical RMS residual of 30-55 cm with the individual fields".

When I look at these plots, my instinct is: "Yes, it's 100% fine to use linear interpolation to access subgrid values at the original grid resolution. You will incur some error but it is very unlikely to exceed the existing DEM error due to the regularity of the field being sampled. Using anything higher order than linear interpolation is a waste because DEM error dominates and the surface itself is not meaningfully smooth (but it is continuous)."

On the other hand, if you first upsample the grid (say, by a factor of 10), then linearly interpolate to access subgrid values, you will certainly incur an error much larger than the stated 30-55 cm. Of course, this will also happen at non-vertex points if you generate a very coarse mesh. But by linearly interpolating at the original 5 mpp resolution, you can at least have some confidence in the vertices, which means that if you refine, you can expect the generated triangle mesh to conform to the DEM if you go fine enough. If you linearly interpolate from a subgrid this will not happen. If you cubically interpolate from a subgrid, this also is unlikely to happen because the DEM itself is not smooth and hence higher order polynomial interpolation is a bit off the mark.

So in my mind it makes sense to just access the (very high quality!) DEM directly and use linear interpolation to look up subgrid values. This is cheap, easy, and straightforward.
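
To be concrete, the subgrid lookup amounts to bilinear interpolation along the lines of the sketch below (the origin and axis conventions here are assumptions, not necessarily those of the getz function in make_mesh_sfp.py):

    import numpy as np

    def getz_bilinear(dem, x, y, x0, y0, dx):
        # dem: 2D array of elevations (e.g. the memory-mapped 5 mpp DEM),
        # (x0, y0): coordinates of dem[0, 0], dx: pixel size.
        i, j = (y - y0)/dx, (x - x0)/dx
        i0, j0 = int(np.floor(i)), int(np.floor(j))
        s, t = i - i0, j - j0
        return ((1 - s)*(1 - t)*dem[i0, j0] + (1 - s)*t*dem[i0, j0 + 1]
                + s*(1 - t)*dem[i0 + 1, j0] + s*t*dem[i0 + 1, j0 + 1])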

@sampotter
Owner Author

sampotter commented Jan 20, 2022

Here are a couple more screenshots showing what I'm after:

image
image

This uses a refinement function with a constant target triangle area "inside" Gerlache and a linear ramp outside the crater. This is a mesh with 10k faces (20 s to make). There's probably a smarter choice of refinement function, but this gets the idea across.

This is all one triangle mesh. Only a very small number of triangles are used to extend the mesh for occlusion.

Another one with 22k faces. Took 59s to make.

image

@steo85it
Collaborator

steo85it commented Jan 20, 2022

Wow, this looks great, thanks. And if there aren't irregularities at the boundary between the "inner" and "outer" regions, it should work well too (I always had trouble with artifacts appearing because of imperfections at the boundaries when I tried a similar approach... but it was probably some mistake on my side)! Also, your plots do seem to show that there is no problem saving an interpolated version of the data.
Ok, if everything already works fine w/o importing other libraries or playing too much with reference frames, I'm also fine with it (especially since we are looking at a specific single case).

@sampotter sampotter reopened this Jan 20, 2022
@nschorgh
Collaborator

@sampotter
Linear interpolation will dull a ridge, unless the points on the ridge are kept. For our application that would be bad indeed.

Lots of labor often goes into the construction of DEMs. Construction of this particular DEM was a paper by itself. Some spend many hours constructing DEMs from shadows and altimeter footprints. For one project I'm involved in, someone will spend 1400 hours improving the DEMs of a handful of craters on Ceres.

So, DEM points can contain precious information. To demonstrate the computational power of a method this doesn't matter, but if one wants accurate shadows and temperatures, giving up original grid points would be an unwanted and unnecessary loss of information.

@sampotter
Owner Author

I agree with that.

My point is that in terms of dulling a ridge, linearly interpolating the DEM at the original resolution (5 mpp) is much less likely to dull a ridge than linearly interpolating a downsampled DEM (50 mpp). I also don't think there is any reason to use higher order interpolation at the 5 mpp resolution because of noise.

If necessary a list of vertices to fix in the triangulation can be inserted using MeshPy, I think. Probably not worth including for the current paper.

@sampotter
Owner Author

sampotter commented Jan 21, 2022

OK, I put together the start of a simple "spoofed" time-dependent example at Gerlache. Here's a movie of the steady state temperature for some fake sun positions:

movie.mp4

This is generated using try_time_dependent_problem.py in the gerlache branch.

Right now I'm using fake sun positions and computing the steady state temperature for the compressed form factor matrix.

To-do list:

  • use real sun positions from SPICE (<- I could use help with this)
  • subsurface heat conduction (<- I could use help with this)
  • run for true FF matrix
  • make comparisons

For the last point, we will need to come up with a list of ways we can compare. One simple thing to check would be the RMS error vs. time between the temperatures computed with the true FF matrix and the compressed FF matrix at different depths z in the subsurface. This will let us check how much drift over time is introduced by using the compressed FF matrix.
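
That comparison is just a reduction over the temperature arrays, along the lines of the following sketch (array names, shapes, and the dummy data are placeholders for the model output):

    import numpy as np

    # Stand-ins for T computed with the true and compressed FF matrices,
    # shape (num_epochs, num_faces) at a fixed depth z.
    rng = np.random.default_rng(0)
    T_true = 100 + 30*rng.random((240, 5000))
    T_comp = T_true + 0.05*rng.standard_normal((240, 5000))

    rms_err_vs_time = np.sqrt(np.mean((T_comp - T_true)**2, axis=1))
    rel_rms_vs_time = rms_err_vs_time/np.sqrt(np.mean(T_true**2, axis=1))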

@steo85it
Collaborator

Is it normal to get this issue when running try_time_dependent_problem.py with the FF.bin produced by make_form_factor_matrices.py?

Traceback (most recent call last):
  File "/home/sberton2/Lavoro/code/python-flux/examples/gerlache/try_time_dependent_problem.py", line 71, in <module>
    hit = shape_model.intersect1(x, d)
  File "/home/sberton2/Lavoro/code/python-flux/src/flux/shape.py", line 133, in intersect1
    return self._intersect1(x, d)
  File "/home/sberton2/Lavoro/code/python-flux/src/flux/shape.py", line 257, in _intersect1
    return self.aabb.intersect1(x, d)
AttributeError: 'flux.cgal.aabb.AABB' object has no attribute 'intersect1'

@sampotter
Owner Author

The start of the Gerlache crater example now uses flux.model.ThermalModel. See try_time_dependent_problem.py.

I just picked any old physical parameters I could steal from other scripts to get this to work (mostly from examples-old/lsp_illuminate.py). I would like to massage this script into something that will generate the data we need for the paper.

I am going to ignore whether this model makes any sense for now and move on to finishing the setup of the example:

  • run this for different FF matrices (true vs compressed w/ varied tol, and for different mesh sizes)
  • do comparisons of the temperatures over time
  • make plots comparing RMS error of T over time
  • make plots comparing T pointwise at some depth far out in time (to get a qualitative sense of the error fields---e.g. are they smooth?)
  • also would be cool: make a plot which shows % of time taken by each task (insolation, compute T, ...)

At the same time we should:

  • fix the parameters to make it more realistic
  • include actual sun positions
  • try for a "global" problem such as Vesta

Here are some videos from layers 0 through 3. Skin depths are 0.1 mm apart. Color range is from 99 K to 128 K.

layer0.mp4
layer1.mp4
layer2.mp4
layer3.mp4

@sampotter
Owner Author

Is it normal to get this issue when running try_time_dependent_problem.py with the FF.bin produced by make_form_factor_matrices.py?

Traceback (most recent call last):
  File "/home/sberton2/Lavoro/code/python-flux/examples/gerlache/try_time_dependent_problem.py", line 71, in <module>
    hit = shape_model.intersect1(x, d)
  File "/home/sberton2/Lavoro/code/python-flux/src/flux/shape.py", line 133, in intersect1
    return self._intersect1(x, d)
  File "/home/sberton2/Lavoro/code/python-flux/src/flux/shape.py", line 257, in _intersect1
    return self.aabb.intersect1(x, d)
AttributeError: 'flux.cgal.aabb.AABB' object has no attribute 'intersect1'

I added a new function intersect to that class... Make sure to pull all changes and rebuild everything. Also double check that the AABB class does indeed have intersect, otherwise something got out of sync...

@nschorgh
Collaborator

I would do the error/convergence test for de Gerlache with the equilibrium temperature - that's much faster and to the point.

@steo85it
Collaborator

Are those at equilibrium? Probably not, and do we want to comment on getting there, in the paper?

@sampotter
Owner Author

No, definitely not at equilibrium. Getting to equilibrium is outside the scope of the paper.

I would do the error/convergence test for de Gerlache with the equilibrium temperature - that's much faster and to the point.

I'll make a note to do these, too.

Since the claim is that we can use the compressed FF matrix to solve the time-dependent problem more quickly, I think it would be useful to try to get a rough idea of what the error incurred by doing so is. I think it should be quite small.

@nschorgh
Collaborator

With equilibrium I simply mean εσT^4 = B (the surface temperature is in equilibrium with the incoming radiance, no subsurface heat flux). Convergence with spatial resolution.
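
In code, that equilibrium relation is just a pointwise inversion (a minimal sketch; the emissivity value is an assumed placeholder):

    import numpy as np

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

    def equilibrium_temperature(B, emissivity=0.95):
        # Solve eps*sigma*T^4 = B for T, elementwise over the facets.
        return (B/(emissivity*SIGMA))**0.25

    print(equilibrium_temperature(np.array([100.0])))  # ~207 K for B = 100 W/m^2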

@sampotter
Owner Author

Yes of course --- we are talking about two different things. You were suggesting plots made w/o subsurface flux and Stefano was just asking whether the simulation had run long enough for the temperature to reach equilibrium in the videos I attached.

@steo85it
Collaborator

steo85it commented Jan 26, 2022

a2504bc should address using real sun positions from SPICE when this option is True (it's not yet thoroughly tested, and feel free to change things you don't like, but the results look reasonable, and if the vertices are given in Cartesian coordinates it should be fine).
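
For reference, generating the sun positions with spiceypy boils down to something like this (the metakernel path, frame, and time range below are assumptions, not necessarily what a2504bc does):

    import numpy as np
    import spiceypy as spice

    # The metakernel should load a leapseconds kernel, a planetary ephemeris and the lunar frame kernels.
    spice.furnsh('gerlache.furnsh')  # placeholder path

    et0 = spice.str2et('2022 JAN 01 00:00:00')
    ets = et0 + np.arange(0, 30*86400, 3600.0)  # one month, hourly steps

    # Sun position in a Moon-fixed frame (km), light-time corrected, one row per epoch.
    sun_positions = np.array([
        spice.spkpos('SUN', et, 'MOON_ME', 'LT+S', 'MOON')[0] for et in ets])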

@sampotter
Owner Author

sampotter commented Jan 26, 2022

Here's a quick error test without subsurface heat conduction... Need to figure out what the most sensible way to do this is.

To make these plots, I sampled the temperature values on a regular grid by setting up an orthographic camera pointing at the crater and tracing a ray for each pixel. I then looked up the temperature value based on which triangle was hit. I am not sure this is the best way to do this, but it's the first idea I had.


Test 1

Here is T with the max inner area set to 0.4 and the max outer area set to 3.0, compared between the compressed FF matrix with tol = 1e-2 and the true FF matrix (percentage error, i.e. 100*|\hat{T} - T|/|T|):

image

The same thing but with tol = 1e-1 instead of 1e-2:

image

In both cases, the max % error is set to 10 on the colorbar, so actually there is quite an improvement here when using a lower tolerance.


Test 2

The same as the first plot from Test 1, except that for the compressed FF matrix we instead set the inner area to 0.75, so that a different mesh is used (again, % error clamped to 10%):

image

You can see that wherever there is a ridge, there is quite a lot of error in the solution. This is understandable: on a ridge, the orientation of the triangles changes significantly, which alters the solution there dramatically. Meshes of two different finenesses only roughly agree in space, and comparing solutions on two different meshes isn't straightforward, I don't think. Even if you use a mesh taken from a regular grid with a different grid spacing, you will have the same problem.

To address this, we should probably use curved elements and higher-order basis functions (e.g., linear at least) instead of flat triangles + constant basis functions. This is a substantive amount of work which could be addressed in another paper, I think. In such a paper we would also want to decide what the correct way to "conservatively interpolate" solutions between meshes is.


I think it makes sense to include Test 1 but not Test 2. That is, we should check the agreement between the true FF matrix and compressed FF matrix for different tolerances, but doing a convergence study at this point may be premature.

What do you guys think?

@sampotter
Owner Author

sampotter commented Jan 26, 2022

There are two separate orthogonal questions here:

  1. Whether triangles + constant basis functions + our view factor approximation (midpoint rule) is a reasonable discretization of the radiosity integral equation for the thermal modeling task at hand
  2. Whether our algorithm accurately approximates the true system matrix for the discretization in (1)

The answer to (2) is "yes" based on numerical evidence we are able to provide.

The answer to (1) is outside the scope of the paper, IMO.


Improving on (1) is a separate line of research.

In general, if the orders of accuracy of the ingredients in (1) are improved, our approach is still relevant and useful and should provide an affirmative answer to (2).

IMO, in this way, the focus of the current paper should be as a proof of concept demonstrating that the answer to (2) is "yes".

@sampotter
Owner Author

a2504bc should address using real sun positions from SPICE when this option is True (it's not yet thoroughly tested, and feel free to change things you don't like, but the results look reasonable and if vertices are given in cartesian coordinates, it should be fine)

This is perfect, thanks!

@steo85it
Collaborator

steo85it commented Jan 26, 2022

I think it makes sense to include Test 1 but not Test 2. That is, we should check the agreement between the true FF matrix and compressed FF matrix for different tolerances, but doing a convergence study at this point may be premature.

I would agree that our goal is to assess the performance of the FF compression (1), rather than of different kinds/choices of meshing (that would be 2, I think, and I agree that's not in the scope of this paper).
Also, why can't one "simply" compare the T(FF) and T(compressed FF) by getting the norm of the differences of the resulting arrays? We should have them for both as output of the model, right? I'm probably missing something.

@sampotter
Owner Author

sampotter commented Jan 26, 2022

Also, why can't one "simply" compare the T(FF) and T(compressed FF) by getting the norm of the differences of the resulting arrays? We should have them for both as output of the model, right? I'm probably missing something.

Yes, we can do this, and we will do this. We could make a plot of this value for varying compression tolerance and mesh size (there are some plots in the Overleaf doc already along these lines); or maybe better, just a table of values, since the paper is going to become overburdened with plots.

The plots above in the case where the meshes are the same size and T is directly comparable are just for visualization purposes (unlike the third plot, where I try to compare directly the fields obtained from different meshes). I think it's helpful to get some visual sense of the error; where it is largest, whether there is some spatial correlation, etc.

Does that answer your question?

@steo85it
Collaborator

steo85it commented Jan 26, 2022

Mmm, not really ... what I meant is that we have T(FF,x,y,z,t) and T(cFF,x,y,z,t) and we can compare and plot those, with associated coordinates to get the same result that you plotted above (I think). What you did is ok, btw, but since you were wondering if there were "simpler" alternatives...

@sampotter
Owner Author

How would you go about making that plot?

@steo85it
Collaborator

steo85it commented Jan 26, 2022

Erase this, I've been messing up examples... different config, sorry... let me think a bit more about this. We are starting from a mesh here, not obvious how to consistently get back to a raster (e.g., https://github.com/jeremybutlermaptek/mesh_to_geotiff). Ok, got the issue, pardon.
On the other hand, we should be able to convert a point cloud x,y,elev --> x,y,T and then make the comparisons we need on that level. But ok, the issue is to which x,y do we associate T (vertices? centers of facets?).

@steo85it
Collaborator

Also, are your plots in stereo projection, or what are the axes coordinates? It would be good to have those in stereo, for the paper (either by keeping the mesh itself in stereo, and just converting to cartesian for the illumination computations - or else by reprojecting before plotting).

@sampotter
Owner Author

sampotter commented Jan 27, 2022

But ok, the issue is to which x,y do we associate T (vertices? centers of facets?).

This has a clear answer. In our discretization of the radiosity integral equation, we use constant basis functions. So, we think of T as being a piecewise constant function defined over the triangulation (ditto the insolation and the radiance). This is not well-defined over edges and vertices (which incident face should we take the T value from?). The way I resolve this is simple: just use whatever triangle the raytracer tells me I hit. In practice, CGAL never says, "oh, you hit an edge---which triangle do you want?". I don't know how it resolves this ambiguity and I don't really care.

Also, are your plots in stereo projection, or what are the axes coordinates?

The axes currently are (x, y) Cartesian. I set up a grid of rays beneath the crater with ray directions all equal to (0, 0, 1). The origin of each ray is of the form (x + h*i, y + h*j, z), for fixed x, y, and z values. (To avoid confusion---by "beneath" I mean outside of the moon, pointing towards the north pole. So, "beneath" as if I were looking at the moon with the north pole oriented upward, i.e. the ray direction points from the south pole to the north pole.)
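
In code, that grid of rays is just the following (bounds, spacing, and z are assumed values, not the ones used for the plots above):

    import numpy as np

    x0, y0, z0, h, n = -25.0, -25.0, -10.0, 0.1, 500  # assumed bounds/spacing
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    origins = np.stack([x0 + h*i, y0 + h*j, np.full(i.shape, z0)], axis=-1).reshape(-1, 3)
    directions = np.tile([0.0, 0.0, 1.0], (origins.shape[0], 1))
    # Each (origin, direction) pair is traced against the shape model and the
    # T value of the hit triangle is assigned to the corresponding pixel.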

Let me think about how best to do a plot like this in stereo projection.

@sampotter
Owner Author

sampotter commented Jan 27, 2022

OK, plotting in stereo was actually pretty easy:

image

Simple way to do this:

  • proceed as before with everything in Cartesian
  • when making the meshes initially, also save out the (x, y) stereo coordinates of the vertices of the mesh (so you have two sets of vertices: stereo & Cart)
  • use the stereo vertices and the face indices to get a triangulation in stereo coordinates

For this plot I just made a CgalTrimeshShapeModel to look up hit points for a grid of rays. But there's probably some scipy or matplotlib convenience function that will allow you to make this same plot...
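
One such convenience function is matplotlib's tripcolor, which colors each face of a triangulation directly. A sketch with dummy arrays standing in for the stereo vertices, faces, and facet temperatures:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.tri import Triangulation
    from scipy.spatial import Delaunay

    # Dummy stand-ins for the stereo vertices V_st (nv, 2), faces F (nf, 3) and facet temperatures T (nf,).
    rng = np.random.default_rng(0)
    V_st = rng.uniform(-25, 25, size=(500, 2))
    F = Delaunay(V_st).simplices
    T = 100 + 30*rng.random(len(F))

    tri = Triangulation(V_st[:, 0], V_st[:, 1], triangles=F)
    plt.tripcolor(tri, facecolors=T, cmap='inferno')  # one flat color per facet
    plt.gca().set_aspect('equal')
    plt.colorbar(label='T [K]')
    plt.savefig('T_stereo.png')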

@steo85it
Collaborator

steo85it commented Jan 27, 2022

Yes, that's the "inverse" of what I was suggesting or usually doing (I usually keep V in stereo, and convert them to Cartesian just for the illumination computation, but it works all the same): one just associates different vertex coordinates with the same indices (and hence facets).
Do we then get the same result if we associate T with the center P of each triangle? In that case, we can easily do what I was suggesting yesterday (producing GTiffs and then reading and comparing them with whatever tools, or directly having the raster xyT in memory and computing differences or normed differences or whatever we want among them). If we always use the same mesh, we don't even need to interpolate; otherwise we might need to. Or do you think that with different meshes (do we really need to compare among them? Or just the results of different FFs based on the same mesh?) your approach would still be better/faster/more accurate?

@sampotter
Owner Author

sampotter commented Jan 27, 2022

Do we then get the same result if we associate T to the center P of each triangle?

You can think of associating T to the center of each triangle if you like.

I find it helpful to think of T as a piecewise constant function defined on the facets of a piecewise linear surface in 3D.

The two can be converted back and forth in the obvious way.

In that case, we can easily do what I was suggesting yesterday (producing GTiffs and then read and compare them with whatever tools, or directly have raster xyT in memory and make computations among them, differences or normed differences or whatever we want). If we always use the same mesh, we don't even need to interpolate, else we might need to. Or do you think that with different meshes (do we really need to compare among them? Or just results of different FF based on the same mesh) your approach would still be better/faster/more accurate?

There are a few things.

I find the GeoTIFFs hard to work with because I haven't learned the format. They slow me down a lot. I am also not really sure what they bring to the table for the purposes of the current paper.

If we want to think of T as a list of point values defined at the centroid of each triangle, then we can just work with these values directly (i.e. with the numpy array that is the result of e.g. compute_steady_state_temp). Indeed, if we always use the same mesh, no interpolation is necessary. And the thing I'm doing to make the plots above isn't necessary, either.

What I'm doing above does let us compare between different meshes. I am not at all sure this is a sensible thing to do. I think we should probably just leave it alone for now.

That said, I think it is helpful to make image plots like the above to give a qualitative sense of the solution. I like this approach because it computes a separate value for each pixel in an image. If we treat T as a bunch of point values, it isn't clear to me how we make such a plot. One way to go would be to make a scatter plot, but I think this is a bit confusing and unpleasant to look at. The pictures I made above are clear: they are just the result of taking a picture of T (in its "piecewise constant function on triangle mesh in 3D" mode).

Please let me know if I'm missing something in your previous response.

@steo85it
Collaborator

steo85it commented Jan 27, 2022

Great, thanks for the clarifications! Storing the stereo coordinates and the associated T in arrays (with more or less complexity), we should be able to plot maps such as this one (it shows T differences among epochs/cycles, but it's along the same lines).
image
I need to try it with the Gerlache output, but it should work, I think... (the white areas are meant to be there; they're not an artifact)

@sampotter
Owner Author

Can you explain the plot a bit and how it differs from the plot I'm making? I need some more context.

Why is there "shot noise" (white pixels) throughout?

@steo85it
Collaborator

Sure, sorry. It's quite close to what you are doing (which is good) but based on those T(t,z,x,y) arrays I was mentioning, which might make it easier to compare and compute differences between different outputs and configurations. Those arrays can be exported to GTiffs (if useful for storing, avoiding pickle issues), but also simply "used" as arrays within the script.
Now, if you are satisfied with your solution, that's perfectly fine (as you pointed out, the results are similar). If not, this might be an alternative approach.

Why is there "shot noise" (white pixels) throughout?

Those are PSRs which I intentionally removed from the whole computation. They should be like this, not an artifact. :)

@sampotter
Owner Author

OK. How do you make a raster for that plot? I'm still confused about that part.

@steo85it
Collaborator

ok, let me try with an output of gerlache and see if/how it works, and we can better compare and discuss. :)

@steo85it
Collaborator

steo85it commented Jan 28, 2022

Nope, more difficult than I expected: it would need regridding and such. A pity, it would have been convenient for I/O and computations.

@sampotter
Owner Author

OK, no problem. We have a way of making the necessary plots for now, which is enough for the current paper. The GeoTIFF format definitely seems useful for the future.

@steo85it
Collaborator

steo85it commented Feb 25, 2022

@sampotter I think there is an issue in the "stereo vertices" of your mesh. For example, when you compare the stereographic coordinates of ldem_87s_50mpp (in meters)
image
with the stereo coordinates (in km) of the deGerlache clip from earlier in the conversation (#55 (comment)):
image
one sees that they fit with each other only if x=y and y=-x. Any idea where the issue could come from? The stereo transformation itself seems to be ok, so it might be in the meshing/raster import step.

Once the transformation above is applied, the illumination/temperature maps look fine (below, an overlay of Tmax in K with the expected PSRs in yellow - the Tmax < 160 K areas are well aligned with the yellow areas), which also seems to rule out a major problem with the Cartesian coordinates (otherwise the illumination would be off, etc...)
image

@sampotter
Owner Author

I would say it is quite possible I did something silly when reading the data in from Mike's GeoTIFF file. I am not sure exactly where the problem is, but I do recall having to screw around with the axes in the image plots to make them match.

@steo85it
Collaborator

steo85it commented Feb 27, 2022

Yes, there is some issue with the axes. Already the fact that you have to fix the center of the roi for Gerlache at

xc_roi, yc_roi = 0, -45 # km, stereo SP

(while y=0 for this crater in stereo projection) shows that there is an issue. I don't know if we care about it, but I would like to make sure that the cartesian positions are ok (for direct illumination).

I compared with the results of the rioxarray library on the same DEM, and the z values are different. We could simply replace the getz function with this one, but your script requests the elevations z(x,y) one by one (instead of once for an array of x and y), so calling that "more complex" function is way too expensive (about 1000x the run time per call). I'm trying to use it as a comparison to figure out where the bug is (I think it's in the indexing within getz, but I'm not sure yet).
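
For what it's worth, the one-by-one calls can be avoided with xarray's vectorized indexing, e.g. something like this (the path and query points are placeholders):

    import numpy as np
    import rioxarray as rio
    import xarray as xr

    rds = rio.open_rasterio('ldem_87s_50mpp.tif')  # placeholder path

    # Look up many (x, y) pairs at once instead of one by one.
    x_query = xr.DataArray(np.array([0.0, 1000.0, 2000.0]), dims='pts')
    y_query = xr.DataArray(np.array([-45000.0, -45500.0, -46000.0]), dims='pts')
    z = rds.sel(x=x_query, y=y_query, method='nearest').values[0]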

Can this be related to the (weird) ordering of the np.array indexes (see result of plt.imshow(dem))?
image

@sampotter
Owner Author

sampotter commented Feb 27, 2022

IIRC, Mike's GeoTIFF file contains (buried within it somewhere) a packed 1D array of 64-bit floats corresponding to x coordinates, ditto for y coordinates, and then a packed 2D array of z values. (Maybe the x and y floats are also 2D arrays, but I'm pretty sure it was 1D...)

I am not sure how the 2D array of z values is ordered. Should be either row major or column major. If we can figure that out, I should easily be able to make my getz function consistent with it. Hopefully that will fix the bug.

@steo85it
Collaborator

(I just noticed that xc, yc are not only switched w.r.t. the usual stereo coordinates, but rather xc=-y and yc=x - just not visible for deGerlache, since y=0)

@steo85it
Collaborator

steo85it commented Feb 27, 2022

IIRC, Mike's GeoTIFF file contains (buried within it somewhere), a packed 1D array of 64-bit floats corresponding to x coordinates, ditto for y coordinates, and then a packed 2D array of z values. (Maybe the x and y floats are also 2D arrays, but pretty sure it was 1D...)

By using

import rioxarray as rio
rds = rio.open_rasterio(tif_path)
x_coords = rds.coords['x'].values
y_coords = rds.coords['y'].values

you should be able to retrieve those coordinates.
(also, FYI, dem = rds.data[0], which should simplify the pipeline)
