
Particle longitudes are wrongly interpolated when crossing the antimeridian #1337

Open
poplarShift opened this issue Jun 27, 2024 · 4 comments



poplarShift commented Jun 27, 2024

When particles cross the antimeridian, they suddenly pop up at roughly (but not exactly) -90, 0, or 90 degrees longitude; see the attached image and csv (from one trajectory only).

(attachments: image, gh-issue.csv)


knutfrode commented Jun 27, 2024

Oh! Are you using the unstructured or the regular/generic netCDF reader here?
Though this issue might well be independent of reader type.

There are some basic dateline tests in OpenDrift, but apparently they do not cover this problem:

`def test_dateline(self):`

If you could make a minimal reproducible example, I can have a look at it.
Ideally this would be a new test in the same style, using a synthetic netCDF file, so that we don't need to add more binary files to the repository.
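
For concreteness, here is a rough sketch of what such a test might look like, using the built-in constant reader rather than a synthetic netCDF file. The config call, seeding position and the final-longitude check are only assumptions for illustration, not an existing test, and a real test would also have to inspect the full output history, since the spurious values apparently only last a few timesteps:

```python
# Sketch of a dateline test: a constant eastward current should carry
# elements smoothly across the antimeridian, with no spurious longitudes.
from datetime import datetime, timedelta
import numpy as np

from opendrift.models.oceandrift import OceanDrift
from opendrift.readers import reader_constant

o = OceanDrift(loglevel=50)
o.set_config('environment:fallback:land_binary_mask', 0)  # open ocean, no stranding
o.add_reader(reader_constant.Reader(
    {'x_sea_water_velocity': 1, 'y_sea_water_velocity': 0}))

o.seed_elements(lon=179.9, lat=70, number=10, radius=1000,
                time=datetime(2022, 8, 2))
o.run(duration=timedelta(hours=48), time_step=timedelta(minutes=60))

# After ~48 h at 1 m/s the elements should sit a few degrees east of the
# dateline, i.e. between -180 and -170; anything near -90, 0 or 90 is the bug.
# (Checking only final positions is a weak test, since the bug is transient.)
lon = o.elements.lon
assert np.all((lon > -180) & (lon < -170)), lon
```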


poplarShift commented Jun 27, 2024

This was with the generic netCDF reader. Note that, strangely, particles only "pop by" those odd longitudes for at most a few timesteps, after which they reappear where they should be. In that sense the bug is not critical.
But it also means that, unless I missed something, the tests you linked would not pick up on this.

I cannot seem to reproduce this in a minimal example, neither with a simple constant reader crossing the antimeridian nor with the original CMEMS product. I did, however, note that the original run that produced the weird longitudes emitted quite a few of these warnings: `WARNING opendrift.readers.basereader.structured:324: Data block from cmems_mod_glo_phy_myint_0.083deg_P1D-m not large enough to cover element positions within timestep...`

To give you an idea of what I was trying (note the negative timestep for a backtrajectory):

```python
import copernicusmarine
from datetime import timedelta, datetime

import numpy as np, pandas as pd

from opendrift.readers import reader_netCDF_CF_generic
from opendrift.models.oceandrift import OceanDrift

o = OceanDrift(
    loglevel=30,
    seed=0,
    iomodule="parquet",
) 
dataset_id = "cmems_mod_glo_phy_myint_0.083deg_P1D-m"
ds = copernicusmarine.open_dataset(
    dataset_id=dataset_id,
)
ds["name"] = str(dataset_id)

o.add_reader(reader_netCDF_CF_generic.Reader(ds, name=dataset_id))

# SEEDING
times = pd.date_range("2022-08-02", "2022-08-19", freq="1D")
# N: number of particles, distributed over the depth ranges and seed locations
number = 10

zl, zu = -50, 0
for time in times:
    for seed in [{"lon": 179.8, "lat": 80, "origin_marker": 0}]:
        o.seed_elements(
            z=np.linspace(zl, zu, number),
            radius=1e3,
            lon=[seed["lon"]] * number,
            lat=[seed["lat"]] * number,
            time=[time] * number,
            origin_marker=[seed["origin_marker"]] * number,
        )
o.run(
    **{
        "duration": timedelta(days=2),
        "time_step": -timedelta(minutes=60),
        "time_step_output": timedelta(hours=1),
        "export_variables": [
            "lon",
            "lat",
            "z",
            "age_seconds",
            "origin_marker",
        ],
        "export_buffer_length": 10,
    },
    outfile="demo-copernicus.pq"
)

df = pd.read_parquet("demo-copernicus.pq")
df.plot.scatter(x='time', y='lon')
df[["lon", "lat"]]
```
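
As a quick diagnostic on that output, something like the following could flag the suspicious rows (column names follow the export list above; the trajectory-ID column name is a placeholder, since it depends on the writer):

```python
import pandas as pd

df = pd.read_parquet("demo-copernicus.pq")

# Everything was seeded near 180 degrees E, so longitudes far from the
# dateline are suspect.
suspect = df[df["lon"].abs() < 170]
print(suspect[["time", "lon", "lat"]].sort_values("time"))

# Jumps of more than ~90 degrees per output step along a trajectory cannot
# be advection. "trajectory" stands in for whatever ID column the writer uses.
jump = df.sort_values("time").groupby("trajectory")["lon"].diff().abs()
print(df[jump > 90])
```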


knutfrode commented Jun 27, 2024

Ok, yes, this is a bit strange.
I see that you are using your own parquet writer, but presumably it does not modify the positions being written. If in doubt, you could try the regular netCDF output instead, to see whether that solves the problem.

The "Data block too small" warnings also pop up when particles are the edge of a reader, and thus then sound a bit misleading. Anyway, the problem could be related to that, or perhaps to reader methods xy2lonlat / lonlat2xy.
I will need come back to look at this another day.
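
One more thought: the specific values you see (roughly -90, 0 and 90) are exactly what naive linear interpolation of longitudes across the dateline would produce. This is only a hypothesis about the mechanism, not a diagnosis, but a toy illustration in plain NumPy (not OpenDrift code) shows the pattern:

```python
import numpy as np

lon0, lon1 = 179.8, -179.9    # positions just either side of the dateline
w = np.linspace(0, 1, 5)      # sub-timestep interpolation weights

# Naive interpolation in longitude space sweeps through ~90, ~0, ~-90
naive = (1 - w) * lon0 + w * lon1
print(naive)                  # roughly [179.8, 89.9, -0.05, -90.0, -179.9]

# Wrap-aware interpolation of the (short) 0.3 degree step stays near +-180
dlon = (lon1 - lon0 + 180) % 360 - 180
wrapped = (lon0 + w * dlon + 180) % 360 - 180
print(wrapped)                # roughly [179.8, 179.9, 180.0, -180.0, -179.9]
```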


poplarShift commented Jun 27, 2024

Yes, my plan was to reproduce the bug and then discover it had been the parquet writer all along... Even if it is the writer, though, the issue is intermittent. Given that the CMEMS product I used is global, the only place the block-size issue can have occurred is at the antimeridian, so that is definitely something to look into (not before the summer, though).
