-
There is the response time from the data source, and then the time to build each tile. Depending on what you want, a warmed-up cache will be performant once it has filled its memory with tiles. If you're using a database, make sure your db connection pool is large enough (sized for the service you are using), and make sure you have enough RAM set in the config; otherwise you might be building tiles and evicting cached ones regularly.
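A minimal sketch of the config knobs mentioned above, assuming Martin's YAML configuration format; the values here are placeholders, not recommendations, and should be tuned for your own hardware and database:

```yaml
# Hypothetical values -- tune for your own setup.
cache_size_mb: 512        # in-memory tile cache; if too small, tiles are rebuilt and evicted constantly
postgres:
  connection_string: 'postgresql://user:pass@localhost:5432/db'
  pool_size: 20           # keep this within your Postgres server's max_connections
```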
-
This evening, I created a PostGIS table containing a vector layer with about 380k geometries that I expected to cause serious delays as Martin served vector tiles. To my surprise, it was fairly performant, but it still falls short of the responsiveness I'm looking for. For example, when I first load the layer in MapLibre at a low zoom level so that I can visualize all of CONUS, I wait about 10 seconds for the entire layer to render after starting Martin; my goal is to get this under 200 ms. That 10-second delay from a fresh start of Martin is on my 2020 MacBook Pro. I assume the number of available CPU cores is likely the most important variable for performance? If so, I'm wondering whether I can hit my target by running Martin on a much larger server in the cloud.
For context, I'm exploring the PostGIS route here as opposed to using pre-rendered vector tiles since I would next like to explore rendering tiles for subsets of this dataset. I'd like to mask this layer by state or watershed boundaries. The web application I'm building will allow the user to define and update a region of interest at will.
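For the masking use case described above, one option is a PostGIS function that Martin can expose as a function source, filtering the layer by a region passed in as a query parameter. This is only a sketch under assumptions: the table and column names (`my_layer`, `regions`, `region_id`) are hypothetical, and it assumes geometries are stored in Web Mercator (EPSG:3857) so `ST_TileEnvelope` can be used directly:

```sql
-- Hypothetical Martin function source: serves MVT tiles for my_layer
-- clipped to the region identified by the ?region_id= query parameter.
CREATE OR REPLACE FUNCTION masked_tiles(z integer, x integer, y integer, query_params json)
RETURNS bytea AS $$
  SELECT ST_AsMVT(tile, 'masked_tiles', 4096, 'geom')
  FROM (
    SELECT ST_AsMVTGeom(l.geom, ST_TileEnvelope(z, x, y), 4096, 64, true) AS geom
    FROM my_layer l                                   -- hypothetical source table
    JOIN regions r ON r.id = (query_params->>'region_id')::int  -- hypothetical mask table
    WHERE l.geom && ST_TileEnvelope(z, x, y)          -- cheap bbox filter first
      AND ST_Intersects(l.geom, r.geom)               -- exact mask test
  ) AS tile
  WHERE geom IS NOT NULL;
$$ LANGUAGE sql STABLE PARALLEL SAFE;
```

Since the user can redefine the region of interest at will, the mask stays a runtime parameter rather than something baked into pre-rendered tiles, which matches the motivation for the PostGIS route.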