
Question about GravityModelling operator #756

Open
rakoczi-h opened this issue Jul 30, 2024 · 2 comments

@rakoczi-h

Problem description

I am attempting to use the pygimli gravimetry package to perform inversion on real gravimetry data. I found that the output of the GravityModelling forward operator is slightly inconsistent with my own forward modelling tool: the conversion factor between microGal and the units of the output appears to be ~0.15, which is not a familiar number. My question is: could you confirm what units you expect for the input densities (g/cm^3 in your examples) and the grid (m?), and what units the output gravity signal is given in?

Your environment


Date: Tue Jul 30 12:48:49 2024 BST

            OS : Darwin
        CPU(s) : 10
       Machine : arm64
  Architecture : 64bit
           RAM : 16.0 GiB
   Environment : Jupyter
   File system : apfs

Python 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:34:54)
[Clang 16.0.6 ]

       pygimli : 1.5.1
        pgcore : 1.5.0
         numpy : 1.26.4
    matplotlib : 3.5.0
         scipy : 1.14.0
          tqdm : 4.66.4
       IPython : 8.25.0
       pyvista : 0.44.0

Steps to reproduce

import pygimli as pg
import numpy as np
import matplotlib.pyplot as plt
from pygimli.physics.gravimetry import GravityModelling
from pygimli.viewer import pv

# Making a grid
dx_m = 0.1
dy_m = 0.1
dz_m = 0.1
x_m = np.arange(-0.5, 0.5+0.05, dx_m)
y_m = np.arange(-0.5, 0.5+0.05, dy_m)
z_m = np.arange(0.0, 1.0+0.05, dz_m)
grid = pg.createGrid(x_m, y_m, z_m)

# Making the survey grid
dx_s = 0.1
dy_s = 0.1
dz_s = 0.1
x_s = np.arange(-0.5, 0.5+0.05, dx_s)
y_s = np.arange(-0.5, 0.5+0.05, dy_s)
xx, yy = np.meshgrid(x_s, y_s)
points = np.column_stack((xx.ravel(), yy.ravel(), -np.ones(np.prod(xx.shape))))

# Forward operator
fop = GravityModelling(grid, points)

# Making the anomaly
rho = 1.0
v = np.zeros((len(z_m)-1, len(y_m)-1, len(x_m)-1))
v[5:9, 4:8, 3:9] = rho  # assuming g/cm^3

# Calculating gravity signal

gz = fop.response(v.ravel())

# The comparison method:
def get_gz_anal(survey_coordinates, limits, rho):
        """
        Analytical forward model based on Jung (1961) and Plouff (1966); see the review by Li (1998). Calculates the vertical gravity at a set of measurement locations from a number of rectangular prisms.
        Parameters
        ----------
            survey_coordinates: array
                [number of measurement locations, (x,y,z)] The locations where the gravity measurements are taken. [units: m]
            limits: array
                [number of prisms, (x,y,z), (min, max)] The array defining the limits of the prisms whose gravity signature we want to see. This has length 1 if we are looking for the gravity from a single box based on its parameters. In the voxelised case, its length is the number of voxels in the model. [units: m]
            rho: float or array 
                If it is a float, all the voxels have this same density. If an array, it must have the same length as the first dimension of limits. [units: kg/m^3]
        Output
        ------
            Returns the vector of vertical gravity values at the survey_coordinates locations. (microGal)
        """

        shape = (np.shape(limits)[0], np.shape(survey_coordinates)[0])

        x = survey_coordinates[:,0]
        y = survey_coordinates[:,1]
        z = survey_coordinates[:,2]

        xi1 = limits[:,0,0]
        xi2 = limits[:,0,1]
        eta1 = limits[:,1,0]
        eta2 = limits[:,1,1]
        zeta1 = limits[:,2,0]
        zeta2 = limits[:,2,1]

        G = 6.67430E-11
        # Vectorize
        x = np.broadcast_to(x, shape)
        y = np.broadcast_to(y, shape)
        xi1 = np.broadcast_to(xi1[:, np.newaxis], shape)
        xi2 = np.broadcast_to(xi2[:, np.newaxis], shape)
        eta1 = np.broadcast_to(eta1[:, np.newaxis], shape)
        eta2 = np.broadcast_to(eta2[:, np.newaxis], shape)
        zeta1 = np.broadcast_to(zeta1[:, np.newaxis], shape)
        zeta2 = np.broadcast_to(zeta2[:, np.newaxis], shape)
        if isinstance(rho, np.ndarray):
            rho = np.broadcast_to(rho[:, np.newaxis], shape)


        x1 = x - xi1
        x2 = x - xi2
        y1 = y - eta1
        y2 = y - eta2
        z1 = z - zeta1
        z2 = z - zeta2

        r111 = np.sqrt(x1 * x1 + y1 * y1 + z1 * z1)
        r211 = np.sqrt(x2 * x2 + y1 * y1 + z1 * z1)
        r121 = np.sqrt(x1 * x1 + y2 * y2 + z1 * z1)
        r112 = np.sqrt(x1 * x1 + y1 * y1 + z2 * z2)
        r212 = np.sqrt(x2 * x2 + y1 * y1 + z2 * z2)
        r221 = np.sqrt(x2 * x2 + y2 * y2 + z1 * z1)
        r122 = np.sqrt(x1 * x1 + y2 * y2 + z2 * z2)
        r222 = np.sqrt(x2 * x2 + y2 * y2 + z2 * z2)

        u111 = -1
        u211 = 1
        u121 = 1
        u112 = 1
        u212 = -1
        u221 = -1
        u122 = -1
        u222 = 1

        t111 = u111 * (x1 * np.log(y1 + r111) + y1 * np.log(x1 + r111) - z1 * np.arctan((x1 * y1) / (z1 * r111)))
        t211 = u211 * (x2 * np.log(y1 + r211) + y1 * np.log(x2 + r211) - z1 * np.arctan((x2 * y1) / (z1 * r211)))
        t121 = u121 * (x1 * np.log(y2 + r121) + y2 * np.log(x1 + r121) - z1 * np.arctan((x1 * y2) / (z1 * r121)))
        t112 = u112 * (x1 * np.log(y1 + r112) + y1 * np.log(x1 + r112) - z2 * np.arctan((x1 * y1) / (z2 * r112)))
        t212 = u212 * (x2 * np.log(y1 + r212) + y1 * np.log(x2 + r212) - z2 * np.arctan((x2 * y1) / (z2 * r212)))
        t221 = u221 * (x2 * np.log(y2 + r221) + y2 * np.log(x2 + r221) - z1 * np.arctan((x2 * y2) / (z1 * r221)))
        t122 = u122 * (x1 * np.log(y2 + r122) + y2 * np.log(x1 + r122) - z2 * np.arctan((x1 * y2) / (z2 * r122)))
        t222 = u222 * (x2 * np.log(y2 + r222) + y2 * np.log(x2 + r222) - z2 * np.arctan((x2 * y2) / (z2 * r222)))

        # Sum contributions over the voxel dimensions
        return 1E8*G*np.sum(rho * (t111 + t211 + t121 + t112 + t212 + t221 + t122 + t222), axis=0)


limits = np.array([[[-0.2, 0.4],[-0.1, 0.3],[0.5, 0.9]]])
gz_anal = get_gz_anal(points, limits, rho*1000)


# Creating an output
print(np.mean(gz/gz_anal))
print(np.std(gz/gz_anal))

plt.imshow(np.reshape(gz/gz_anal, (11,11)))
plt.colorbar()
plt.show()
Output:
0.14983338527557355
1.5583110862032863e-15
[Screenshot: imshow plot of gz/gz_anal over the 11×11 survey grid, uniform at ~0.1498]

Label: question

@halbmy self-assigned this Jul 31, 2024
@halbmy (Contributor) commented Aug 27, 2024

Sorry for the late answer. Actually, I have never really used the operator except for generating the example with the Li&Oldenburg model. There is indeed a strange factor whose origin needs to be searched for in the Holstein kernel paper. Maybe @mschiffler can help.

@mschiffler commented
I think the gravitational constant is missing as a factor. Furthermore, I think the kernel is defined in SI units and thus expects kg/m³ as input data.
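A quick back-of-the-envelope check is consistent with this explanation (a sketch only, not verified against the pyGIMLi source): if `response` returns the geometric kernel sum without G, while the analytical reference above multiplies by 1e8·G and takes kg/m³ (a factor 1000 over the g/cm³ fed to the operator), then the observed ratio should be 1/(G·10¹¹):

```python
# Hypothetical unit check: assumes GravityModelling.response omits the
# gravitational constant G and works directly on the supplied densities.
G = 6.67430e-11  # gravitational constant in SI units

# gz_anal = 1e8 * G * kernel * (rho * 1000), gz = kernel * rho (assumed),
# so gz / gz_anal = 1 / (1e8 * G * 1000) = 1 / (G * 1e11)
predicted_ratio = 1.0 / (1e8 * G * 1000)
observed_ratio = 0.14983338527557355  # mean(gz / gz_anal) from the script

print(predicted_ratio)                 # ~0.1498, matching the observation
print(abs(predicted_ratio - observed_ratio) / observed_ratio)
```

The relative difference is on the order of 1e-5, so the mystery factor of ~0.15 is plausibly just the missing 1/(G·10¹¹) scaling.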
