Requesting suggestions and feedback #838
-
Hi Rohit, excited to see this. :)

In terms of which APIs are the "central" ones, though, I am surprised that […]

Uber would update rider and driver locations roughly once every 4 seconds, which gets you on the order of millions of […]

After that, for surge pricing, […] The surge system then uses a parallel map system where each layer of the map has a specific cell it is computing for, but has access to all of the cells in the region from the prior layer. Because dispatching a driver to a rider has an impact on the availability of drivers for other riders, changes to the state of one H3 cell will have an impact on neighboring cells, with more distant cells having less and less of an impact. So for each cell to be computed, it will use the indexed data for its own cell as well as its neighbors, using either […]

Then on the driver side, a map of these surge prices is shown to indicate to drivers where there's strong demand. Exactly how this is done has varied a lot. At first we used […]

In any case, the rendering of cells by getting the lat/lng boundary to provide to mapping software (that is not H3-aware, at least) is also pretty frequently called. If done naively, it would be called for all visible cells for each viewport area defined, which could be many times a second on viewport panning, so I think […]

The cell validation logic really only needs to be called when ingesting a dataset from a source you don't fully trust. Within Uber's services it was assumed that all H3 indexes passed between them were valid and generated by the same source code, and so fully compatible with each other, so validation checks were not generally performed.

H3's vertex mode is much newer, so I'll have to let @nrabinowitz chime in on how "tight" a loop they're used in within the system they were designed for, but at least for the original purpose of H3 at Uber, indexing and "de-indexing" were just as important as the grid traversal tooling from a performance perspective.
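As a rough sketch, and assuming the current H3 v4 API names, the call paths described above look something like the following. The resolution, k value, and coordinates are illustrative assumptions, not values from any real service; error handling is omitted for brevity.

```c
/* Minimal sketch of the operations described above: indexing a location
 * update, pulling a cell's neighborhood for surge-style aggregation,
 * fetching a cell boundary for rendering, and validating an index from
 * an untrusted source. Assumes the H3 v4 C API. */
#include <stdlib.h>
#include "h3api.h" /* include path depends on how H3 is installed */

/* Index a rider/driver location update into a cell at the given resolution. */
H3Index indexLocation(double latDeg, double lngDeg, int res) {
    LatLng g = {.lat = degsToRads(latDeg), .lng = degsToRads(lngDeg)};
    H3Index cell = 0;
    latLngToCell(&g, res, &cell);
    return cell;
}

/* Collect the cell plus its neighbors out to k rings, as a surge-style
 * computation might when it needs indexed data from nearby cells.
 * Returns the allocated capacity; caller frees *outCells. */
int64_t neighborhood(H3Index origin, int k, H3Index **outCells) {
    int64_t size = 0;
    maxGridDiskSize(k, &size);
    *outCells = calloc(size, sizeof(H3Index));
    gridDisk(origin, k, *outCells);
    return size;
}

/* Fetch the lat/lng boundary of a cell for non-H3-aware mapping software. */
void boundaryForRendering(H3Index cell, CellBoundary *out) {
    cellToBoundary(cell, out);
}

/* Validate an index when ingesting data from a source you don't fully trust. */
int ingestIsTrustworthy(H3Index candidate) { return isValidCell(candidate); }
```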
-
As context, I'd say we have usually built benchmarks for things we're testing or iterating on, so not all of the benchmarks we include are what I'd consider a core execution path. I would not include […]. Of our current benchmarks, I would suggest: […]
These probably cover the most commonly used functions in the library, though […]
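For illustration only, a stand-alone timing loop around one of the library's commonly used functions might look like the sketch below. This is not the harness in src/apps/benchmarks; the iteration count, coordinates, and resolution are arbitrary assumptions.

```c
/* Hypothetical micro-benchmark of latLngToCell using a plain clock() loop.
 * Assumes the H3 v4 C API; not the repository's own benchmark harness. */
#include <stdio.h>
#include <time.h>
#include "h3api.h" /* include path depends on how H3 is installed */

int main(void) {
    const int iterations = 1000000;
    LatLng g = {.lat = degsToRads(40.7128), .lng = degsToRads(-74.0060)};
    H3Index out = 0;

    clock_t start = clock();
    for (int i = 0; i < iterations; i++) {
        latLngToCell(&g, 9, &out);
    }
    clock_t end = clock();

    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("latLngToCell: %d iterations in %f s (%f us/call)\n", iterations,
           seconds, seconds * 1e6 / iterations);
    return 0;
}
```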
-
Hello UberH3 community members,
I am currently working on having UberH3 as one of the potential candidate workloads in the next version of the SPEC CPU benchmark and need some feedback about the workload. SPEC is a very well-established industry consortium that develops performance benchmarks; more details about SPEC CPU can be found here: https://www.spec.org/cpu2017/.
Geospatial indexing is an interesting domain, and UberH3 therefore presents an exciting opportunity for SPEC CPU. For my candidate workload (based on UberH3), I have used slightly modified versions of benchmarkVertex.c, benchmarkIsValid.c, and benchmarkGridDiskCells.c (taken from the GitHub location: https://github.com/uber/h3/tree/master/src/apps/benchmarks). These benchmarks were selected to meet certain criteria and requirements specific to the nature of SPEC CPU (I can provide more details on this if necessary).
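Roughly speaking, and assuming the H3 v4 API, those three benchmark files exercise entry points along the lines of the sketch below (vertex enumeration, index validation, and grid disk traversal). This is an illustrative stand-alone program, not the actual benchmark code; the coordinate, resolution, and k value are arbitrary, and error handling is omitted.

```c
/* Sketch of the H3 v4 calls roughly corresponding to benchmarkVertex.c,
 * benchmarkIsValid.c, and benchmarkGridDiskCells.c. */
#include <stdio.h>
#include <stdlib.h>
#include "h3api.h" /* include path depends on how H3 is installed */

int main(void) {
    /* Index an example coordinate (values are arbitrary). */
    LatLng location = {.lat = degsToRads(37.7749), .lng = degsToRads(-122.4194)};
    H3Index cell = 0;
    latLngToCell(&location, 9, &cell);

    /* benchmarkIsValid.c-style work: validate an index. */
    int valid = isValidCell(cell);

    /* benchmarkVertex.c-style work: enumerate the cell's vertexes. */
    H3Index vertexes[6] = {0};
    cellToVertexes(cell, vertexes);
    LatLng vertexPoint;
    vertexToLatLng(vertexes[0], &vertexPoint);

    /* benchmarkGridDiskCells.c-style work: all cells within k rings. */
    int k = 3;
    int64_t diskSize = 0;
    maxGridDiskSize(k, &diskSize);
    H3Index *disk = calloc(diskSize, sizeof(H3Index));
    gridDisk(cell, k, disk);

    printf("valid=%d, first vertex lat=%f deg, disk capacity=%lld\n", valid,
           radsToDegs(vertexPoint.lat), (long long)diskSize);
    free(disk);
    return 0;
}
```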
I would like to get feedback from the community on these benchmarks. In particular, I want to make sure that the underlying APIs used by these benchmarks are considered part of the central set of APIs and are therefore representative of a real-world geospatial workload's execution code path.
Any insights or suggestions from any of you would be greatly appreciated.
Thanks in advance.
-Rohit