diff --git a/_posts/2020-01-03-decoding-google-street-view-urls.md b/_posts/2020-01-03-decoding-google-street-view-urls.md deleted file mode 100644 index 8d1ded0..0000000 --- a/_posts/2020-01-03-decoding-google-street-view-urls.md +++ /dev/null @@ -1,220 +0,0 @@ ---- -date: 2020-01-03 -title: "Decoding a Google Street View URL" -description: "Analysing the structure of Street View URLs to better understand exposed functionality." -categories: guides -tags: [Google, Street View] -author_staff_member: dgreenwood -image: /assets/images/blog/2020-01-03/google-street-view-url-meta.jpg -featured_image: /assets/images/blog/2020-01-03/google-street-view-url-sm.png -layout: post -published: true -redirect_from: - - /blog/2020/decoding-google-street-view-urls ---- - -**Analysing the structure of Street View URLs to better understand exposed functionality.** - -_Word of warning: this post is accurate at the time of publication (2020-01-03) but may not be on the day you are reading this due to possible changes made by Google. If you do spot an error, [please email us to make us aware](/contact)._ - -Have you ever taken a closer look at a Street View URL? - -Beware, they're very messy... _at first glance..._ - -Let's look at a simplified example using a Google image: - - - -`https://www.google.com/maps/@51.5083663,-0.1114931,2a,75y,260.89h,78.32t/data=!3m7!1e1!3m5!1sjJXYsBpPPqWuvSR5RUaIEQ!2e0!6s%2F%2Fgeo2.ggpht.com%2Fcbk%3Fpanoid%3DjJXYsBpPPqWuvSR5RUaIEQ%26output%3Dthumbnail%26cb_client%3Dmaps_sv.tactile.gps%26thumb%3D2%26w%3D203%26h%3D100%26yaw%3D339.28687%26pitch%3D0%26thumbfov%3D100!7i13312!8i6656` - -See what I mean about it being messy? - -Though, like learning a language, once you understand the structure the rest tends to fall into place. So let's learn the _language_ of the Google Street View URL. - -The first part (`@51.5083663,-0.1114931`) is the `latitude` and `longitude` of the photo. This is fixed for the photo.
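To make the structure concrete, here is a short Python sketch that pulls that position pair out of a URL like the one above. The pattern is my own reading of the observed URL layout, which Google does not document and may change at any time:

```python
import re

def parse_position(url):
    """Extract the @latitude,longitude pair from a Street View URL.

    Assumption: the position always appears as @<lat>,<lng> after /maps/,
    as observed in the examples in this post (not a documented format).
    """
    match = re.search(r"@(-?\d+\.\d+),(-?\d+\.\d+)", url)
    if match is None:
        return None
    return float(match.group(1)), float(match.group(2))

url = "https://www.google.com/maps/@51.5083663,-0.1114931,2a,75y,260.89h,78.32t/data=!3m7"
print(parse_position(url))  # (51.5083663, -0.1114931)
```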
- -The following three values differ depending on the zoom and orientation selected by the user. As you move around inside the image, watch how these values change in the URL with each movement. - -The field of view (`75y`) defines the zoom level between 1 (max) and 90 (min). 75 is the default value Google uses. - -The heading (`260.89h`) can be seen next. It is measured between >=0 and <360. - -The final part (`78.32t`) is the pitch of the view (measured between 1 and 179). - -Following the orientation and position information, the actual image data (`data=`) is defined. - -Google seems to employ a number of bangs (!) in the `data` section of the URL. Each segment of the attribute is preceded by "!", a number from 1 - 9, and a letter (m, e, etc.). - -In this example there are 8 bangs... - -``` -data= -!3m7 -!1e1 -!3m5 -!1sjJXYsBpPPqWuvSR5RUaIEQ -!2e0 -!6s//geo2.ggpht.com/cbk?panoid=jJXYsBpPPqWuvSR5RUaIEQ&output=thumbnail&cb_client=maps_sv.tactile.gps&thumb=2&w=203&h=100&yaw=339.28687&pitch=0&thumbfov=100 -!7i13312 -!8i6656 -``` - -...only one of which is _human readable_. - -The `data` values are [encoded](https://www.w3schools.com/tags/ref_urlencode.ASP). - -For example, `%3D` decoded is `=`.
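Python's standard library will do this decoding for you; a quick sketch using `urllib.parse.unquote`:

```python
from urllib.parse import unquote

# The percent-encoded 6th bang value from the URL above.
encoded = "%2F%2Fgeo2.ggpht.com%2Fcbk%3Fpanoid%3DjJXYsBpPPqWuvSR5RUaIEQ"
decoded = unquote(encoded)  # %2F -> /, %3F -> ?, %3D -> =
print(decoded)  # //geo2.ggpht.com/cbk?panoid=jJXYsBpPPqWuvSR5RUaIEQ
```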
- -So, `panoid%3DjJXYsBpPPqWuvSR5RUaIEQ` when decoded is `panoid=jJXYsBpPPqWuvSR5RUaIEQ` - -[Fully decoded](https://www.urldecoder.org/) the Street View URL above becomes: - -`https://www.google.com/maps/@51.5083663,-0.1114931,2a,75y,260.89h,78.32t/data=!3m7!1e1!3m5!1sjJXYsBpPPqWuvSR5RUaIEQ!2e0!6s//geo2.ggpht.com/cbk?panoid=jJXYsBpPPqWuvSR5RUaIEQ&output=thumbnail&cb_client=maps_sv.tactile.gps&thumb=2&w=203&h=100&yaw=339.28687&pitch=0&thumbfov=100!7i13312!8i6656` - -_Decoded URL, hence will not load correctly._ - -Looking at the human-readable bang specifically (broken into new lines to make it easier to read): - -``` -!6s//geo2.ggpht.com/cbk?panoid=jJXYsBpPPqWuvSR5RUaIEQ -&output=thumbnail -&cb_client=maps_sv.tactile.gps -&thumb=2 -&w=203 -&h=100 -&yaw=339.28687 -&pitch=0 -&thumbfov=100 -``` - -You can see a host (`geo2.ggpht.com/cbk`) with a `panoid=` value. This is the unique reference to the panoramic image. - -After that we see `&output=thumbnail` followed by information that seems to be related to said output (a thumbnail image). I assume this because `&w=203&h=100` looks to refer to width (203px) and height (100px), and values like `yaw=` and `pitch=` remain static and likely set the view of the thumbnail, unlike the first part of the URL which changes when the user moves the view. - -Though I have no idea what the `&output=thumbnail` actually refers to -- it can be entirely removed from the URL and the image will still load correctly in the browser: - -`https://www.google.co.uk/maps/@51.5083663,-0.1114931,2a,75y,260.89h,78.32t/data=!3m7!1e1!3m5!1sBUnezD_ki4oX_PDm2A1lWw!2e0!6s%2F%2Fgeo0.ggpht.com%2Fcbk%3Fpanoid%3DjJXYsBpPPqWuvSR5RUaIEQ!7i13312!8i6656` - -Let's compare another decoded URL on the same stretch of footpath, also captured by Google, to see if it offers any clues to the missing information.
- - `https://www.google.com/maps/@51.5082164,-0.1125885,2a,75y,101.34h,88.6t/data=!3m7!1e1!3m5!1s7YsnZ32rM6gi8Ivi2k3viA!2e0!6s//geo0.ggpht.com/cbk?panoid=7YsnZ32rM6gi8Ivi2k3viA&output=thumbnail&cb_client=maps_sv.tactile.gps&thumb=2&w=203&h=100&yaw=340.40964&pitch=0&thumbfov=100!7i13312!8i6656` - -_Decoded URL, hence will not load correctly._ - -The 8 bangs for this URL: - -``` -data= -!3m7 -!1e1 -!3m5 -!1s7YsnZ32rM6gi8Ivi2k3viA -!2e0 -!6s//geo0.ggpht.com/cbk?panoid=7YsnZ32rM6gi8Ivi2k3viA&output=thumbnail&cb_client=maps_sv.tactile.gps&thumb=2&w=203&h=100&yaw=340.40964&pitch=0&thumbfov=100 -!7i13312 -!8i6656 -``` - -Almost all the bang values are identical to the first image, except inside the 6th bang, where the `panoId` and the `yaw` differ, and, perhaps most interestingly, the value in the 4th bang (1st URL = `!1sjJXYsBpPPqWuvSR5RUaIEQ` / 2nd URL = `!1s7YsnZ32rM6gi8Ivi2k3viA`). - -Google Street View Place name - -Initially I thought the 4th bang might be referring to the [Google Place ID of the image](/blog/place-id-google-street-view), however both images show "The Queen's Walk" as the place, [and a lookup](https://developers.google.com/maps/documentation/javascript/examples/places-placeid-finder) of the `place ID` for "The Queen's Walk" returns `EixUaGUgUXVlZW4ncyBXYWxrLCBTb3V0aCBCYW5rLCBMb25kb24gU0UxLCBVSyIuKiwKFAoSCekLrB7HBHZIEQ5z1qt4guCgEhQKEgk_J5VMtgR2SBFYYta2uf8XCQ`.
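Splitting the bangs apart programmatically makes comparisons like this less error-prone. A sketch, assuming (as observed above, not documented by Google) that each bang is `!` followed by a number, a letter, and a value:

```python
import re

def split_bangs(data_section):
    """Split a data= section into (number, letter, value) triples.

    Assumption: every bang matches !<digits><letter><value>, where the
    value runs until the next "!" (true of the examples in this post).
    """
    return re.findall(r"!(\d+)([a-z])([^!]*)", data_section)

data = "!3m7!1e1!3m5!1s7YsnZ32rM6gi8Ivi2k3viA!2e0!7i13312!8i6656"
for number, letter, value in split_bangs(data):
    print(number, letter, value)
```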
- -Digging deeper, let's take another Street View image taken by Google and its decoded URL: - - - -`https://www.google.co.uk/maps/@45.8326327,6.8634657,2a,75y,68.65h,91.77t/data=!3m7!1e1!3m5!1sk92ptfUShdo9lKo5PYMGew!2e0!6s//geo1.ggpht.com/cbk?panoid=k92ptfUShdo9lKo5PYMGew&output=thumbnail&cb_client=maps_sv.tactile.gps&thumb=2&w=203&h=100&yaw=10.15913&pitch=0&thumbfov=100!7i13312!8i6656` - -_Decoded URL, hence will not load correctly._ - -The 8 bangs for this URL: - -``` -data= -!3m7 -!1e1 -!3m5 -!1sk92ptfUShdo9lKo5PYMGew -!2e0 -!6s//geo1.ggpht.com/cbk?panoid=k92ptfUShdo9lKo5PYMGew&output=thumbnail&cb_client=maps_sv.tactile.gps&thumb=2&w=203&h=100&yaw=10.15913&pitch=0&thumbfov=100 -!7i13312 -!8i6656 -``` - -Again, only the 6th bang (`panoId` and `yaw`) and the 4th bang change (1st URL = `!1sjJXYsBpPPqWuvSR5RUaIEQ` / 2nd URL = `!1s7YsnZ32rM6gi8Ivi2k3viA` / 3rd URL = `!1sk92ptfUShdo9lKo5PYMGew`). - -I'm still none the wiser. - -This time, let's take a look at user-uploaded content -- one of our panoramas (versus Google content): - - - -`https://www.google.co.uk/maps/@51.2895639,-0.8214139,3a,75y,1.74h,90t/data=!3m8!1e1!3m6!1sAF1QipP8Umrvyj6jz7HQFCJ1IGxUFo4tSfZdciNHyBqX!2e10!3e11!6shttps://lh5.googleusercontent.com/p/AF1QipP8Umrvyj6jz7HQFCJ1IGxUFo4tSfZdciNHyBqX=w203-h100-k-no-pi0-ya23.025953-ro0-fo100!7i5760!8i2880` - -_Decoded URL, hence will not load correctly._ - -The *9 bangs* for this URL: - -``` -data= -!3m8 -!1e1 -!3m6 -!1sAF1QipP8Umrvyj6jz7HQFCJ1IGxUFo4tSfZdciNHyBqX -!2e10 -!3e11 -!6shttps://lh5.googleusercontent.com/p/AF1QipP8Umrvyj6jz7HQFCJ1IGxUFo4tSfZdciNHyBqX=w203-h100-k-no-pi0-ya23.025953-ro0-fo100 -!7i5760 -!8i2880 -``` - -It seems user content is handled differently. Firstly, the URL is made up of 9 bangs. - -This time the 1st (`!3m8`), 3rd (`!3m6`), 4th (`!1sAF1QipP8Umrvyj6jz7HQFCJ1IGxUFo4tSfZdciNHyBqX`), new 6th (`!3e11`), 7th (pano info), 8th (`!7i5760`), and 9th (`!8i2880`) bangs are all different to the previous image.
- -The pano info (7th bang, equivalent to the 6th bang for Google-uploaded images) is also considerably different: - -`!6shttps://lh5.googleusercontent.com/p/AF1QipP8Umrvyj6jz7HQFCJ1IGxUFo4tSfZdciNHyBqX=w203-h100-k-no-pi0-ya23.025953-ro0-fo100` - -This time the host is different (`googleusercontent.com` vs `ggpht.com`). The `panoid` is defined after the `/p/` (in this case `AF1QipP8Umrvyj6jz7HQFCJ1IGxUFo4tSfZdciNHyBqX` vs. `panoid=`), and the `output=thumbnail` argument is replaced with what looks to be a reference to a thumbnail, but in a different structure containing references to (_I think_) width (`w203`), height (`h100`), pitch (`pi0`), yaw (`ya23.025953`), and field of view (`fo100`). - -I think `ro0` refers to the roll value (my assumption is that Google images omit this value because roll is accounted for by Google pre-upload and thus always equal to 0). - -I have no idea what `k-no` defines, though. - -Now, one final Street View image to use as a comparison: another piece of user-contributed imagery, this time from Federico Debetto and the Zanzibar Street View project: - - - -`https://www.google.co.uk/maps/@-6.2159039,39.2119878,3a,75y,4.95h,93.34t/data=!3m8!1e1!3m6!1sAF1QipNZxgmALRIjjkbvUWBAOJYLi6niVs3pCosZ5ul3!2e10!3e11!6shttps://lh5.googleusercontent.com/p/AF1QipNZxgmALRIjjkbvUWBAOJYLi6niVs3pCosZ5ul3=w203-h100-k-no-pi-18.636326-ya45.429626-ro-1.5321815-fo100!7i7680!8i3840` - -_Decoded URL, hence will not load correctly._ - -``` -data= -!3m8 -!1e1 -!3m6 -!1sAF1QipNZxgmALRIjjkbvUWBAOJYLi6niVs3pCosZ5ul3 -!2e10 -!3e11 -!6shttps://lh5.googleusercontent.com/p/AF1QipNZxgmALRIjjkbvUWBAOJYLi6niVs3pCosZ5ul3=w203-h100-k-no-pi-18.636326-ya45.429626-ro-1.5321815-fo100 -!7i7680 -!8i3840 -``` - -Compared to the Trek View user-uploaded image analysed previously, this time the 4th (`!1sAF1QipNZxgmALRIjjkbvUWBAOJYLi6niVs3pCosZ5ul3`), 7th (pano info), 8th (`!7i7680`) and 9th (`!8i3840`) bangs differ. - -And I give up.
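Before giving up completely I did at least automate the comparison. A sketch that diffs two `data=` sections bang by bang (it assumes, as in the examples above, that no bang value itself contains a `!`):

```python
def diff_bangs(data_a, data_b):
    """Return (position, value_a, value_b) for each bang that differs.

    Positions are 1-indexed to match the "4th bang" style used in
    the prose; bangs beyond the shorter list are ignored.
    """
    bangs_a = data_a.lstrip("!").split("!")
    bangs_b = data_b.lstrip("!").split("!")
    return [
        (i + 1, a, b)
        for i, (a, b) in enumerate(zip(bangs_a, bangs_b))
        if a != b
    ]

google = "!3m7!1e1!3m5!1sjJXYsBpPPqWuvSR5RUaIEQ!2e0!7i13312!8i6656"
user = "!3m8!1e1!3m6!1sAF1QipP8Umrvyj6jz7HQFCJ1IGxUFo4tSfZdciNHyBqX!2e10!3e11"
print(diff_bangs(google, user))
```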
- -I'm not sure my own trial and error is getting me any closer to becoming _fluent_ in Street View URL bangs! - -My guess is that some of these bangs refer to my Google account and other user-specific variables (like browser, device, type of Google Maps app...) in addition to the image location. - -Hopefully some enterprising reader might be able to help me complete this puzzle... \ No newline at end of file diff --git a/_posts/2020-01-10-underwater-google-street-view.md b/_posts/2020-01-10-underwater-google-street-view.md deleted file mode 100644 index cde965d..0000000 --- a/_posts/2020-01-10-underwater-google-street-view.md +++ /dev/null @@ -1,132 +0,0 @@ ---- -date: 2020-01-10 -title: "Underwater Street View" -description: "The challenges we've faced and resulting solutions when capturing underwater 360-degree tours." -categories: guides -tags: [Google, Street View, 360Bubble, HoudahGeo, GPS, dive, GoPro, Fusion, Max, Ricoh, Theta, Garmin, Virb, Samsung, Gear 360, Insta360, ONE] -author_staff_member: dgreenwood -image: /assets/images/blog/2020-01-10/street-view-underwater-meta.jpg -featured_image: /assets/images/blog/2020-01-10/street-view-underwater-sm.jpg -layout: post -published: true -redirect_from: - - /blog/2020/underwater-google-street-view ---- - -**The challenges we've faced and resulting solutions when capturing underwater 360-degree tours.** - -Unusual creatures, amazing geological features, sunken treasure... - -Our oceans are amazing places. - -Despite this, - -> More than eighty percent of our ocean is unmapped, unobserved, and unexplored. Much remains to be learned from exploring the mysteries of the deep - -[As reported by the National Ocean Service (US)](https://oceanservice.noaa.gov/facts/exploration.html). - -Knowing the GoPro Fusions on our Trek Packs are waterproof, we took a boat out to sea to start shooting. - -Though we quickly ran into problems...
- -## Water changes the effect of light - -**Problem** - -Without going _too deep_, having water directly on the GoPro Fusion's lenses causes light refraction issues. - -Light behaves differently in water, and differences in light on either side of the camera cause issues during stitching, including stitch lines in the photo and focus problems (blurring of the photo). - -To solve these problems, a clear barrier between the water and camera is required. The cleanest way to achieve this is with a "fishbowl design". - -**Solution** - -[Introducing the 360 Bubble](https://360bubble.co/). - -Trek View GoPro Fusion Bubble mount - -It's suitable for the GoPro Fusion, GoPro Max, Ricoh Theta S/V, Samsung Gear 360/Gear 360 2017, Garmin Virb, Insta360 ONE, Nikon Keymission 360 and many other consumer 360 cameras, and rated for use to a depth of 10 metres. - -The downsides: it's a little bulky to put in your hand luggage (and requires a hard case when placed in checked luggage). And the price. - -## GPS does not work underwater - -**Problem** - -GPS does not work underwater, especially in salt-water. - -The reason: radio signals do not propagate very far underwater. Heavy salt-water just makes this worse. - -Take a GPS receiver, place it just 30 cm underwater, and it'll probably lose its lock on any GPS satellites. - -Despite the GoPro Fusion being waterproof-rated to 10m, this does not include GPS functionality. - -This means you'll need a receiver on the surface to capture a GPS track of your dive. - -As always, we needed a cheap and easy solution. - -**Solution** - -Trek View GPS Swim Buoy - -After some experimentation we used a swim buoy with a GPS receiver watch inside it, tied to the diver using a dive rope (we were diving in an open area). - -We chose a cheap [swim buoy](https://www.amazon.co.uk/gp/product/B07DDCMYYZ/) and [GPS receiver with a good chipset](https://www.amazon.com/Columbus-P-1-Professional-Data-Logger/dp/B07MD6TWW9).
You could also use a smartphone to track the surface GPS log (we were a little worried about getting ours wet!). - -Don't worry about placing the GPS tracker in the dry buoy -- [GPS will work through plastic](https://blog.mapspeople.com/gps-the-complete-guide). - -On the boat we turn on the GoPro Fusion, setting it to 5-second timelapse mode without GPS. - -When the diver gets in the water, we turn on the watch to capture location every second. - -As the diver moves in the water, they keep the buoy string tight to keep it as close to overhead as possible. - -It's not perfect, and does not capture depth (we're currently experimenting with dive watch data), but it works surprisingly well out in the harsh conditions of the ocean. - -The only issue we've suffered is the camera overheating in the bubble after around 2 hours of continuous shooting (due to the lack of airflow in the 360 Bubble). We've found no solution to this issue yet, and suspect all housings will suffer the same issue. - -## Photo EXIF data and GPS data are separate - -**Problem** - -After capturing the photos and GPS track, you'll need to geotag the photos (add GPS co-ordinates to the photos' EXIF data). - -Even 1 minute of footage (at a 5-second timelapse) will produce 12 photos that need to be geotagged. - -[Luckily there are lots of software tools that can automate the geotagging of images](https://havecamerawilltravel.com/photographer/geotagging-software/). - -**Solution** - -HoudahGeo5 - -[We use HoudahGeo 5](https://www.houdah.com/houdahGeo/) to geotag our photos and believe it is well worth the $50 price tag (there is a free trial). - -The geotagging process is fairly straightforward, and HoudahGeo 5 gives you an enormous amount of control over it. - -GoPro Fusion Studio looks for GPS co-ordinates in the front images (identified by the prefix `GF`), therefore you need to geotag only the front-facing images.
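For the curious, the core of what these tools do is simple: pair each photo's capture time with the closest fix in the GPS track. A minimal Python sketch (the track points below are made up for illustration, and it assumes the camera and receiver clocks are already in sync):

```python
from datetime import datetime, timedelta

def nearest_fix(photo_time, track):
    """Return the (time, lat, lon) GPS fix closest in time to the photo."""
    return min(track, key=lambda fix: abs(fix[0] - photo_time))

start = datetime(2020, 1, 10, 9, 0, 0)
# One fix per second from the surface buoy (hypothetical values).
track = [(start + timedelta(seconds=s), 50.5700 + s * 1e-5, -2.4400)
         for s in range(60)]

# A photo taken 12.4 s into the dive matches the fix recorded at 12 s.
fix_time, lat, lon = nearest_fix(start + timedelta(seconds=12, milliseconds=400), track)
print(fix_time, lat, lon)
```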
- -For our requirements, we set HoudahGeo 5 to match the time of the photo to the timestamp of GPS co-ordinates for geotagging. Clearly this means it is vital the camera and GPS receiver clocks are in sync (although HoudahGeo does allow you to correct time offsets between devices if they deviate -- a very useful feature). - -## Google Street View does not support underwater photos - -**Problem** - -This is not true. Google Street View does support underwater 360-degree photos. - -**Solution** - -Google Street View underwater - -[Here are some of the dive sites already captured on Google Street View to inspire you](https://www.google.com/streetview/gallery/#oceans/). - -Many of these were captured by [the Underwater Earth team](https://www.underwater.earth) (who inspired this post -- [check out their cameras](https://www.underwater.earth/gallery)!). - -## Become a Trekker - -Our oceans are changing quickly. - -From rising temperatures bleaching coral reefs to plastic pollution collecting in giant "garbage patches". - -We need to do a lot more to protect our oceans, and that starts with education. - -Help us capture underwater images to inspire others to get involved in marine conservation efforts and keep a record of our oceans' health. \ No newline at end of file diff --git a/_posts/2020-09-18-playing-with-mapillary-api.md b/_posts/2020-09-18-playing-with-mapillary-api.md deleted file mode 100644 index cc6fd9e..0000000 --- a/_posts/2020-09-18-playing-with-mapillary-api.md +++ /dev/null @@ -1,310 +0,0 @@ ---- -date: 2020-09-18 -title: "Playing with the Mapillary API" -description: "A quick look at some of the API queries we've used against the Mapillary API."
-categories: developers -tags: [Mapillary, object detection] -author_staff_member: dgreenwood -image: /assets/images/blog/2020-09-18/mapillary-object-detections-meta.jpg -featured_image: /assets/images/blog/2020-09-18/mapillary-api-object-detections.jpg -layout: post -published: true -redirect_from: - - /blog/2020/playing-with-mapillary-api ---- - -**A quick look at some of the API queries we've used against the Mapillary API.** - -[Mapillary's Chris Beddow wrote a brilliant blog post about getting started with the Mapillary API](https://blog.mapillary.com/update/2020/08/28/map-data-mapillary-api.html). - -I wanted to add some of our favourite API requests for uncovering interesting data. - -We recently ran a mapping party in the New Forest, UK. - -The photos are great to explore interactively to get a feel for the visual beauty of the area. - -A location can be requested via the Mapillary API using a bounding box (`bbox`). - -There are lots of tools to calculate coordinates for a bounding box programmatically. - -A useful web tool for doing this is [boundingbox.klokantech.com](https://boundingbox.klokantech.com/). - -Draw a bounding box - -A bounding box is a rectangular box that can be determined by the x and y axis coordinates in the upper-left corner and the x and y axis coordinates in the lower-right corner of the rectangle. - -In mapping terms we can use latitude and longitude for position. - -``` -bounding_box=[min_longitude,min_latitude,max_longitude,max_latitude] -``` - -My bounding box for the New Forest is: - -``` -bounding_box=-1.774513671,50.7038878539,-1.2966083975,50.9385637585 -``` - -You'll notice I drew a polygon on the map that was converted to a rectangle. This is because bounding boxes are always rectangular. Bounding polygons do exist, but the Mapillary API search endpoints do not accept them.
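If you would rather compute the bounding box in code than in a web tool, the arithmetic is just min/max over your points. A sketch (the corner points are taken from the bounding box above):

```python
def bbox_param(points):
    """Build a bbox string (min_lon,min_lat,max_lon,max_lat) from (lon, lat) pairs."""
    lons = [lon for lon, _ in points]
    lats = [lat for _, lat in points]
    return f"{min(lons)},{min(lats)},{max(lons)},{max(lats)}"

# Rough corners of the New Forest area used in this post.
corners = [(-1.774513671, 50.7038878539), (-1.2966083975, 50.9385637585)]
print(bbox_param(corners))
# -1.774513671,50.7038878539,-1.2966083975,50.9385637585
```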
- -Important note: I would not typically use a bounding box this large against the Mapillary API, especially when a large volume of images is likely (e.g. in cities), because such a query will be slow and prone to errors. - -I'd recommend: - -1. resize your bounding box (and make smaller systematic requests across an area) or, -2. in the case of our mapping party where we know some variables, you can use other parameters instead of bbox to filter images, like `image_keys`, `organization_keys`, `userkeys`, `usernames`, `start_time` or `end_time` where these values are known, then filtering images locally on your machine after getting a response. - -Now that we know the area we want to analyse, we can run some queries against the Mapillary API. - -[I will assume you've already read Chris' post which will show you the basics of forming a request to the Mapillary API](https://blog.mapillary.com/update/2020/08/28/map-data-mapillary-api.html). - -One thing to note: you'll see I use `per_page=500` in my requests. This says: only show 500 records in the response and then paginate. In many cases, more than 500 total records will be returned, in which case you can increase to the maximum records allowed, `per_page=1000` (it will be slower, and potentially still too small), or use some logic to iterate through each page. - -## Object detections vs. features - -First it's important to distinguish between object detections and features in Mapillary. - -Mapillary Object Detections - -[Object detections](https://help.mapillary.com/hc/en-us/articles/115000967191-Object-detections) are areas (x,y,x,y,x,y... co-ordinates) of an image that have been detected as a certain object. For example, you can see each pixel in the above image corresponds to an object detection, like a car or road surface. - -Whereas features can be thought of as single points on a map.
For example, a car has been detected (by Mapillary object detections) in an image and has been assigned a real-world latitude and longitude value. - -Mapillary Features - -[Mapillary features](https://help.mapillary.com/hc/en-us/articles/115002332165) use multiple photos to determine an object's position using triangulation. - -For example, 3 images contain a photo of a car. Using the position of each of those photos (latitude and longitude reported in metadata), Mapillary can estimate the actual position of the car and then assign it to that feature. As such, a feature will usually have different co-ordinates from the images used to detect it. - -In the last image above, I've tried to demonstrate an object (traffic light) being identified in 6 images from which a feature's position has been determined. - -### Wildlife (object detections) - -Let's start with the [`object_detections`](https://www.mapillary.com/developer/api-documentation/#object-detections) endpoint so we can identify photos that contain wildlife of some sort. - -Wild ponies can be found all over the New Forest and are a draw for visitors. Being so close to the coast, it's also a great place for bird watching. Let's take an _automated_ look for them (`animal--bird` and `animal--ground-animal`): - -``` -curl "https://a.mapillary.com/v3/object_detections/segmentations?client_id=\ -&values=animal--bird,animal--ground-animal\ -&bbox=-1.774513671,50.7038878539,-1.2966083975,50.9385637585\ -&per_page=500" -``` - -I'll get a response containing all the images (Mapillary image keys) with the wildlife defined in my query (`animal--bird`, `animal--ground-animal`), which I can then take a look at. - -If I wanted to filter only images belonging to participants of our mapping party I could also use the parameter `usernames=`. Or, should they all belong to the Mapillary Trek View organisation (they do), I could use the parameter `organization_keys=`.
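When more than one page of results is needed, the iteration logic can be written once. A sketch using the `requests` library; it assumes (as the v3 API did at the time of writing) that pagination is exposed through standard `Link` headers, which `requests` surfaces as `response.links`:

```python
import requests

def fetch_all_features(url, params, get=requests.get):
    """Collect GeoJSON features across every page of a paginated response."""
    features = []
    while url:
        response = get(url, params=params)
        response.raise_for_status()
        features.extend(response.json().get("features", []))
        # Follow the rel="next" Link header until it disappears.
        url = response.links.get("next", {}).get("url")
        params = None  # the next URL already carries the query string
    return features
```

The `get` parameter only exists to make the function easy to exercise with a stub; in real use you would pass your `client_id`, `bbox` and `per_page` values in `params`.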
- - -### Seasonal vegetation (object detections) - -It's a truly beautiful place to visit in late summer, with the trees still in full leaf. - -One of the things we want to do is run another mapping party in the winter to show the visual seasonal differences. - -This will also make it possible to provide a rough estimate of canopy cover in summer compared to winter (using the sum area of vegetation reported by the Object Detection endpoints for a position -- keep reading for more info). - -Problem is, it's not possible to pass date parameters to the `object_detections` endpoint. Therefore, to compare summer and winter, we need to run two queries. - -First we need to find all images in the bounding box that were `captured_at` in the summer months. [For this we can use the `images` endpoint](https://www.mapillary.com/developer/api-documentation/#images): - -``` -curl "https://a.mapillary.com/v3/images?client_id=\ -&start_time=2020-04-01\ -&end_time=2020-09-30\ -&bbox=-1.774513671,50.7038878539,-1.2966083975,50.9385637585\ -&per_page=500" -``` - -Here I'm defining summer as the 6 months between the start of April (`2020-04-01`) and the last day of September (`2020-09-30`). - -And here's what the response might look like: - -``` -{ - "type": "FeatureCollection", - "features": [ - { - "type": "Feature", - "properties": { - "ca": 0, - "camera_make": "", - "camera_model": "", - "captured_at": "1970-01-01T00:00:03.000Z", - "key": "QKCxMqlOmNrHUoRTSrKBlg", - "pano": true, - "sequence_key": "KrZrFFEzBszRJTaBvK3aqw", - "user_key": "T8XscpSs_3_W673i7WFQug", - "username": "underhill", - "organization_key": "GWPwbGxhu82M5HiAeeuduH", - "private": false - }, - "geometry": { - "type": "Point", - "coordinates": [ - -135.6759679, - 63.8655195 - ] - } - } - ] -} -``` - -This will return a lot of images, each with a `features.properties.key` value. This is the unique Mapillary image key of each photo matching the specified criteria.
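Pulling those keys out of a response is a one-liner; a sketch with a stubbed-down response:

```python
def image_keys(feature_collection):
    """Collect the Mapillary image key from each feature in a response."""
    return [feature["properties"]["key"]
            for feature in feature_collection.get("features", [])]

# A stubbed-down FeatureCollection in the shape the images endpoint returns.
response = {
    "type": "FeatureCollection",
    "features": [{"type": "Feature", "properties": {"key": "QKCxMqlOmNrHUoRTSrKBlg"}}],
}
print(",".join(image_keys(response)))  # QKCxMqlOmNrHUoRTSrKBlg
```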
- -We can now use these image keys against the `object_detections` endpoint. - -``` -https://a.mapillary.com/v3/object_detections/instances?client_id=YOUR_CLIENT_ID -  &image_keys=KEY_1,KEY_2,... -  &values=nature--vegetation -  &per_page=500 -``` - -A snippet of a response: - -``` -{ - "type": "FeatureCollection", - "features": [ - { - "type": "Feature", - "properties": { - "area": 0.0010786056518554688, - "captured_at": "2020-02-03T10:22:50.000Z", - "image_ca": 280.39, - "image_key": "---wuOuOEBSdTC1FT_cOwA", - "image_pano": false, - "key": "kex97g0i6zc8t48rdd0lbu", - "score": 0.6509804129600525, - "shape": { - "type": "Polygon", - "coordinates": [ - [ - [ - 0.7685546875, - 0.6494140625 - ], - [ - 0.7939453125, - 0.6494140625 - ], - [ - 0.7939453125, - 0.69189453125 - ], - [ - 0.7685546875, - 0.69189453125 - ], - [ - 0.7685546875, - 0.6494140625 - ] - ] - ] - }, - "value": "nature--vegetation" - }, - "geometry": { - "type": "Point", - "coordinates": [ - 13.010694722222222, - 55.59290138888889 - ] - } - }, - ... - ] -} -``` - -The `features.properties.shape.coordinates` value shows a polygon of the outline of the `nature--vegetation` object. - -By calculating the area of all polygons with this object we can get an idea of how much foliage grows on deciduous vegetation in these areas over the summer months (and subsequently how much is shed during winter). - -IMPORTANT: as noted, the `object_detections` endpoint returns detections in each photo (not by individual object). In almost all cases a tree will be covered in more than one photo. Therefore the sum of areas for every image will include the same object counted potentially many times. - -Our plan is to capture images of the same paths; however, this still makes a like-for-like comparison almost impossible -- unless you can capture images in the winter in exactly the same place. - -My _crude_ fix: take the count of photos returned in summer and winter and weight by the number of images in the sample.
For example, if 1000 photos are captured in summer and 500 in winter I will multiply the sum of area for summer by 0.5 (to account for the 50% reduction in image count in winter). Unless you can suggest an improved methodology (please!)? - -### Benches (map features) - -Let's now take a look at the [`map_features`](https://www.mapillary.com/developer/api-documentation/#map-features) endpoint that returns locations of objects as point features on the map. As noted before, this is the actual real-world position of the object (based on photo co-ordinates). - -As walking takes up a lot of energy, let's search for the exact position of a bench (`object--bench`) to sit on... - -``` -curl "https://a.mapillary.com/v3/map_features?client_id=\ -&layers=point\ -&values=object--bench\ -&bbox=-1.774513671,50.7038878539,-1.2966083975,50.9385637585\ -&per_page=500" -``` - -Here's what the response might look like: - -``` -{ - "type": "FeatureCollection", - "features": [ - { - "type": "Feature", - "properties": { - "accuracy": 4.3415914, - "altitude": 2.1636841, - "detections": [ - { - "detection_key": "o8kc5kth6o7m4f5eosebin7od5", - "image_key": "tsWJOYler98YAmg97kUzLQ", - "user_key": "KXcBzSdwPIGrXgEP8qdcEQ" - }, - { - "detection_key": "gq9da6nqktj5rklq5vrc4q8sul", - "image_key": "39JmaDL9LxsaijLV7n2Ccg", - "user_key": "KXcBzSdwPIGrXgEP8qdcEQ" - } - ], - "direction": 335.4141, - "first_seen_at": "2020-07-25T20:01:43.850Z", - "key": "8yfe7htuqtcd2vrjsxucidqkpl", - "last_seen_at": "2020-08-08T10:49:33.648Z", - "layer": "lines", - "value": "object--bench" - }, - "geometry": { - "coordinates": [ - 1.774513671, - 50.7038878539 - ], - "type": "Point" - } - }, - ... - ] -} -``` - -In it we see two `feature.detections` (`detection_key=o8kc5kth6o7m4f5eosebin7od5` and `detection_key=gq9da6nqktj5rklq5vrc4q8sul`) of an `object--bench` in two respective images (`"image_key": "tsWJOYler98YAmg97kUzLQ"` and `"image_key": "39JmaDL9LxsaijLV7n2Ccg"`).
- -This entry is showing a single physical bench located at `longitude=1.774513671` and `latitude=50.7038878539` (GeoJSON coordinates are ordered longitude, latitude) and present in the two images listed (that were used to work out its position). - -More detections are returned, but for brevity I have omitted them from the response printed in this post (`...`). - -**A note on feature types** - -[Mapillary has three types of features it can detect](https://help.mapillary.com/hc/en-us/articles/115002332165): - -* point features: 42 types of objects as point features (i.e. map features that are represented by points on the map, like fire hydrants or street lights). - * _In the example I used earlier, I've shown the feature `object--bench` which has a type `"type": "Point"` (point feature)._ -* traffic signs: 1,500 different traffic sign classes (also extracted as point features). -* line features: 19 object classes are extracted as line features (map features represented by lines, such as guardrails or lanes). - -## Your Use-cases... - -Hopefully this gives you a few more ideas to build on Chris' post. I also want to say a big thank you to Chris for proof-reading this post, and providing very valuable feedback. - -Please do share the use-cases you're using the Mapillary API for -- I can’t wait to see what other projects are being built on top of it.
\ No newline at end of file diff --git a/assets/images/blog/2020-01-03/google-street-view-place-name.png b/assets/images/blog/2020-01-03/google-street-view-place-name.png deleted file mode 100644 index 3c70174..0000000 Binary files a/assets/images/blog/2020-01-03/google-street-view-place-name.png and /dev/null differ diff --git a/assets/images/blog/2020-01-03/google-street-view-url-meta.jpg b/assets/images/blog/2020-01-03/google-street-view-url-meta.jpg deleted file mode 100644 index c59436b..0000000 Binary files a/assets/images/blog/2020-01-03/google-street-view-url-meta.jpg and /dev/null differ diff --git a/assets/images/blog/2020-01-03/google-street-view-url-sm.png b/assets/images/blog/2020-01-03/google-street-view-url-sm.png deleted file mode 100644 index 6e7f4e1..0000000 Binary files a/assets/images/blog/2020-01-03/google-street-view-url-sm.png and /dev/null differ diff --git a/assets/images/blog/2020-01-10/HoudahGeo5-Screenshot-Automatic.jpg b/assets/images/blog/2020-01-10/HoudahGeo5-Screenshot-Automatic.jpg deleted file mode 100644 index 942c903..0000000 Binary files a/assets/images/blog/2020-01-10/HoudahGeo5-Screenshot-Automatic.jpg and /dev/null differ diff --git a/assets/images/blog/2020-01-10/google-street-view-underwater-diving.png b/assets/images/blog/2020-01-10/google-street-view-underwater-diving.png deleted file mode 100644 index a80b9e3..0000000 Binary files a/assets/images/blog/2020-01-10/google-street-view-underwater-diving.png and /dev/null differ diff --git a/assets/images/blog/2020-01-10/gopro-fusion-360-bubble.jpg b/assets/images/blog/2020-01-10/gopro-fusion-360-bubble.jpg deleted file mode 100644 index 685be7c..0000000 Binary files a/assets/images/blog/2020-01-10/gopro-fusion-360-bubble.jpg and /dev/null differ diff --git a/assets/images/blog/2020-01-10/street-view-underwater-meta.jpg b/assets/images/blog/2020-01-10/street-view-underwater-meta.jpg deleted file mode 100644 index 19791b6..0000000 Binary files 
a/assets/images/blog/2020-01-10/street-view-underwater-meta.jpg and /dev/null differ diff --git a/assets/images/blog/2020-01-10/street-view-underwater-sm.jpg b/assets/images/blog/2020-01-10/street-view-underwater-sm.jpg deleted file mode 100644 index c72460f..0000000 Binary files a/assets/images/blog/2020-01-10/street-view-underwater-sm.jpg and /dev/null differ diff --git a/assets/images/blog/2020-01-10/trek-view-swim-buoy-gps-reciever-1.jpg b/assets/images/blog/2020-01-10/trek-view-swim-buoy-gps-reciever-1.jpg deleted file mode 100644 index a0e4379..0000000 Binary files a/assets/images/blog/2020-01-10/trek-view-swim-buoy-gps-reciever-1.jpg and /dev/null differ diff --git a/assets/images/blog/2020-09-18/bounding-box-draw-web.jpg b/assets/images/blog/2020-09-18/bounding-box-draw-web.jpg deleted file mode 100644 index 55eb56f..0000000 Binary files a/assets/images/blog/2020-09-18/bounding-box-draw-web.jpg and /dev/null differ diff --git a/assets/images/blog/2020-09-18/mapillary-api-object-detections.jpg b/assets/images/blog/2020-09-18/mapillary-api-object-detections.jpg deleted file mode 100644 index a8dcbe6..0000000 Binary files a/assets/images/blog/2020-09-18/mapillary-api-object-detections.jpg and /dev/null differ diff --git a/assets/images/blog/2020-09-18/mapillary-features.png b/assets/images/blog/2020-09-18/mapillary-features.png deleted file mode 100644 index 6913731..0000000 Binary files a/assets/images/blog/2020-09-18/mapillary-features.png and /dev/null differ diff --git a/assets/images/blog/2020-09-18/mapillary-object-detection.png b/assets/images/blog/2020-09-18/mapillary-object-detection.png deleted file mode 100644 index 9d52358..0000000 Binary files a/assets/images/blog/2020-09-18/mapillary-object-detection.png and /dev/null differ diff --git a/assets/images/blog/2020-09-18/mapillary-object-detections-meta.jpg b/assets/images/blog/2020-09-18/mapillary-object-detections-meta.jpg deleted file mode 100644 index 84c51e3..0000000 Binary files 
a/assets/images/blog/2020-09-18/mapillary-object-detections-meta.jpg and /dev/null differ diff --git a/assets/images/blueprint/trek-view-map-search.jpg b/assets/images/blueprint/trek-view-map-search.jpg new file mode 100644 index 0000000..ab49cef Binary files /dev/null and b/assets/images/blueprint/trek-view-map-search.jpg differ diff --git a/assets/images/blueprint/trek-view-map-trail.jpg b/assets/images/blueprint/trek-view-map-trail.jpg new file mode 100644 index 0000000..a908985 Binary files /dev/null and b/assets/images/blueprint/trek-view-map-trail.jpg differ diff --git a/assets/images/blueprint/trek-view-map.jpg b/assets/images/blueprint/trek-view-map.jpg new file mode 100644 index 0000000..6637ee9 Binary files /dev/null and b/assets/images/blueprint/trek-view-map.jpg differ diff --git a/blueprint/index.md b/blueprint/index.md index cb0d5aa..05a3eb8 100644 --- a/blueprint/index.md +++ b/blueprint/index.md @@ -10,8 +10,6 @@ redirect_from:
-

Overview

-

Since the inception of Trek View I've wanted to build a map platform similar to Street View, but designed for adventurers.

I have over 100 terabytes of footage on the map; it's just incredibly hard to search and share.

@@ -22,7 +20,7 @@ redirect_from:

Now I am building it.

-What's wrong with Street View or Mapillary?
+What's wrong with Street View or Mapillary?

Nothing.

@@ -36,15 +34,15 @@ redirect_from:

The assumption is that you know where you want to drop into the imagery. For looking up what a storefront looks like from an address, or whether parking is easy, Mapillary and Street View are perfect. For trails, what matters more is finding viewpoints or way-markers, and those rarely have an address or fixed point you can search on.

-What are the challenges with building a street level image map?
+What are the challenges with building a street level image map?

What seems like a fairly simple tool, a map with images you can drop into, turns out to be very complex (and expensive) once you get under the hood...

-Data types/size
+Data types/size

My backup of GoPro imagery and video is more than 100 TB, and growing quickly. And that is just imagery I've shot!

-To give a basic storage estimate using [Amazon S3 storage](https://aws.amazon.com/s3/pricing/) (I know there are cheaper options), it costs $0.023 per GB for storage. So $0.023 * 30000 = $690/mo!
+To give a basic storage estimate using [Amazon S3 storage](https://aws.amazon.com/s3/pricing/) (I know there are cheaper options), it costs $0.023 per GB per month. 100 TB is roughly 100,000 GB, so $0.023 * 100000 = $2,300/mo!

These services also charge for bandwidth usage. For example, you pay for requests made against your S3 buckets and objects. Assuming a user views 30 or 40 images per session, the costs get huge!
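To make the arithmetic above easy to re-run, here is a back-of-envelope sketch in Python. The $0.023/GB/month storage rate is the S3 figure quoted above; the egress rate, session count, and per-image size are illustrative assumptions, not quoted prices:

```python
# Back-of-envelope hosting cost for ~100 TB of imagery on S3.
STORAGE_GB = 100_000    # ~100 TB expressed in GB
STORAGE_RATE = 0.023    # $/GB/month -- S3 Standard rate quoted above
EGRESS_RATE = 0.09      # $/GB transferred out -- assumed, check the pricing page

storage_cost = STORAGE_GB * STORAGE_RATE  # ~$2,300/mo

# Assumed traffic: 10,000 sessions/month, ~40 images viewed per
# session, ~5 MB served per image.
images_served = 10_000 * 40
egress_gb = images_served * 5 / 1_000     # MB -> GB
transfer_cost = egress_gb * EGRESS_RATE   # ~$180/mo

print(f"storage:  ${storage_cost:,.0f}/mo")
print(f"transfer: ${transfer_cost:,.0f}/mo")
```

The storage line is fixed, but the transfer line scales with traffic, which is why a map that actually gets popular becomes expensive quickly.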

@@ -54,7 +52,7 @@ redirect_from:

For someone with a budget of less than $100/mo to run this, following the approach of hosting all the data myself is impossible.

-Database storage
+Database storage

This is where the complexity can come in. In Street View you have interconnected blue lines. You can jump between images seamlessly in the interface.

@@ -64,7 +62,7 @@ redirect_from:

With time, these queries and processing logic can be tuned, but working with spatial data is tough (at least for someone who doesn't work full-time in this area).
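To make "jumping between images" concrete, the core query is a nearest-neighbour lookup over image coordinates. Below is a minimal, self-contained sketch (the image IDs and coordinates are hypothetical; a real implementation would use a spatial index such as PostGIS rather than a linear scan):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    r = 6_371_000  # mean Earth radius, metres
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_image(lat, lon, images, max_m=20):
    """Return the closest (id, lat, lon) image within max_m metres, or None.

    `images` stands in for rows of the sequence metadata table.
    """
    best = min(images, key=lambda img: haversine_m(lat, lon, img[1], img[2]))
    if haversine_m(lat, lon, best[1], best[2]) > max_m:
        return None  # too far to offer a "jump" -- end of the blue line
    return best

# Three hypothetical images along a trail, roughly 10 m apart.
images = [
    ("img-001", 50.70380, -1.77440),
    ("img-002", 50.70389, -1.77440),
    ("img-003", 50.70398, -1.77440),
]
print(nearest_image(50.70390, -1.77440, images))  # nearest is img-002
```

The `max_m` cut-off is what decides whether the interface shows a "next image" arrow or treats the sequence as having ended.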

-User interface
+User interface

Viewing a single 360 in a panoramic viewer like [Pannellum](https://pannellum.org/) is easy.

@@ -76,7 +74,7 @@ redirect_from:

In short, trying to build this from scratch to a level that would be acceptable for a user would not be easy.

-My hacky plan...
+My hacky plan

I'm used to working with limitations like this, and quite enjoy it.

@@ -89,33 +87,48 @@ redirect_from:
* They offer an open-source panoramic browser, MapillaryJS -- as used in the Mapillary web app

-With this in mind, I am using Mapillary as a backend as follows...
+With this in mind, I decided to use Mapillary as a backend as follows...

-1. allow user to upload photos or videos via a Trek View web application to Mapillary
-2. the Mapillary processed metadata for each sequence uploaded is stored in Trek View web application
-3. a user views images in Trek View web application (the images themselves are loaded from Mapillary servers)
+1. allow a user to upload photos or videos via a Trek View web application direct to Mapillary
+2. the Mapillary-processed metadata for each sequence uploaded is stored in the Trek View web application, but the actual images live on the Mapillary server
+3. a user requests images using the metadata stored in the Trek View web application database, but the images themselves are loaded from Mapillary servers (the URL of which lives in the metadata inside the Trek View database)
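To illustrate steps 2 and 3, here is roughly the shape of what the Trek View database would hold per image. Every field name and URL below is hypothetical -- the real Mapillary metadata and image URL scheme may differ:

```python
# Hypothetical shape of one row in the Trek View database. Trek View
# stores only metadata; the image bytes stay on Mapillary's servers.
image_record = {
    "sequence_id": "seq-0001",         # hypothetical sequence identifier
    "image_key": "jJXYsBpPPqWuvSR5",   # hypothetical Mapillary image key
    "lat": 50.70389,
    "lon": -1.77440,
    "captured_at": "2020-06-01T10:15:00Z",
    # Hypothetical URL: the viewer loads pixels from Mapillary,
    # never from Trek View's own storage.
    "image_url": "https://images.example-mapillary-cdn.com/jJXYsBpPPqWuvSR5/thumb-2048.jpg",
}

def viewer_payload(record):
    """What the Trek View web app would hand to the front-end viewer."""
    return {
        "src": record["image_url"],                  # fetched from Mapillary
        "position": (record["lat"], record["lon"]),  # places the map marker
    }

print(viewer_payload(image_record))
```

The key design point: Trek View persists only lightweight metadata, and the viewer fetches pixels straight from Mapillary, which is what keeps the hosting bill near zero.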
-From here I can do neat things like:
+The massive risk of this plan

-* Allow users to group sequences into larger ones (e.g. to create an entire trail)
-* Expose a search and filter on the map
-* Show weather, air quality, etc for each image (and also expose via search)
-* Use "adventure" specific views, including elevation, length of sequence, descriptive info, etc.
+The obvious risk with this plan, and I hate it, is that Facebook (aka Mapillary) can kill this product at any time.

-Here are some mockups I've created to try and illustrate what I have in my mind;
+If Facebook shuts down Mapillary entirely, or simply stops allowing users to upload or retrieve images for free, my product is dead in the water.

+As this is a hobby project which I'm quite happy to use as a learning experience, I am reluctantly happy to overlook this issue. However, I do expect to wake up one day to find my map broken. To be clear, if this were a commercial project, I would not proceed any further with this approach (I'm putting this warning here for the many people I have spoken to who are considering some form of commercial competitor to Street View or Mapillary. You have been warned!).

-The massive risk of this plan
+The proposed app

-The obvious risk with this plan, and I hate it, is that Facebook (aka Mapillary) can kill this product at anytime.
+Once I have this data in a database I can build neat features on top of it, including:

-If Facebook shut-down Mapillary entierly or simply stop allowing users to upload or retrieve images for free my product is dead in the water.
+Adventure specific search

-As this is a hobby project which I'm quite happy to use as a learning experience I am reluctantly happy to overlook this issue. However, I do expect to wake up one day for my map to be broken. To be clear, if this was a commercial project, I would not proceed any further with this approach (I'm putting this warning here for the many I have spoken to considering some form of competing commercial product to Street View or Mapillary. You have been warned!).
+![Trek View Map Search](/assets/images/blueprint/trek-view-map-search.jpg)

+
+Search and filter the sequences you want to see. Filter by activity. Filter by the time of year they were captured. Filter by the weather...
+
+Enrich sequences
+
+Add additional metadata to each sequence, including weather and air quality, showing this to the user when viewing a sequence, and also exposing it via search.
+
+More intuitive navigation of imagery
+
+![Trek View Map](/assets/images/blueprint/trek-view-map.jpg)
+
+Using trail-specific views, including navigating the images in a sequence by elevation.
+
+Grouping of sequences
+
+![Trek View Map Trail](/assets/images/blueprint/trek-view-map-trail.jpg)
+
+Image taken from Ride with GPS
+
+Allow users to group sequences into larger ones (e.g. to create an entire trail) and add descriptive information to aid users considering visiting the trail themselves.

    \ No newline at end of file