---
output: github_document
editor_options:
chunk_output_type: console
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(echo = TRUE, cache = TRUE, fig.width = 7, fig.height = 7, fig.align = "center", dpi = 143)
```
# Penn MUSA Masterclass: 3D Mapping and Visualization with R and Rayshader
Tyler Morgan-Wall (@tylermorganwall), Institute for Defense Analyses
Personal website: https://www.tylermw.com
Rayshader website: https://www.rayshader.com
GitHub: https://www.github.com/tylermorganwall
First, let's install all the required packages for this masterclass. I recommend R 3.6.0 or higher for this class--lower versions of R use a different RDS format version, and if you have trouble with some of the raster operations you won't be able to load the backup RDS files.
```{r install, warning=FALSE, message=FALSE}
# install.packages(c("ggplot2","raster", "rayrender", "spatstat", "spatstat.utils","suncalc","here", "sp","lubridate","rgdal", "magick", "av","xml2", "dplyr"))
#macOS: xcrun issue? type "xcode-select --install" into terminal
#macOS imager.so or libx11 errors? Install X11: https://www.xquartz.org
# install.packages("remotes")
# remotes::install_github("giswqs/whiteboxR")
# whitebox::wbt_init()
# remotes::install_github("tylermorganwall/rayshader")
```
Now load the packages:
```{r library, warning=FALSE, message=FALSE}
options(rgl.useNULL = FALSE)
library(ggplot2)
library(whitebox)
library(rayshader)
library(rayrender)
library(raster)
library(spatstat)
library(spatstat.utils)
library(suncalc)
library(sp)
library(lubridate)
library(rgdal)
setwd(here::here())
```
# Hillshading with Rayshader
We're going to start by making our own 2D maps with rayshader. Here, we load in a DEM of the River Derwent in Hobart, Tasmania. We're going to plot it using a few different methods, and then we're going to go into the third dimension.
Let's take a raster object and extract a bare R matrix to work with rayshader. To convert the data from the raster format to a bare matrix, we will use the rayshader function `raster_to_matrix()`.
```{r loadhobart, fig.width = 4, fig.height = 4}
loadzip = tempfile()
download.file("https://dl.dropboxusercontent.com/s/8ltz4j599z4njay/dem_01.tif.zip", loadzip)
## Alternate Link
#download.file("https://tylermw.com/data/dem_01.tif.zip", loadzip)
hobart_tif = raster::raster(unzip(loadzip, "dem_01.tif"))
unlink(loadzip)
hobart_mat = raster_to_matrix(hobart_tif)
unlink("dem_01.tif")
hobart_mat[1:10,1:10]
```
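Before shading, it's worth sanity-checking the matrix itself (a quick sketch; the exact dimensions and elevation range depend on the downloaded file):

```r
# Confirm the heightmap is a bare numeric matrix and check its extent
dim(hobart_mat)    # number of rows and columns in the heightmap
range(hobart_mat)  # minimum and maximum elevation values
```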
The structure of this data is simply an array of numbers. We will first visualize the data with the `height_shade()` function, which performs a traditional elevation-to-color mapping.
```{r height_hobart, fig.width = 4, fig.height = 4}
hobart_mat %>%
height_shade() %>%
plot_map()
```
Here, the default uses what R calls `terrain.colors()`, although it doesn't look like any terrain I've ever seen.
Where in nature does color map to elevation? Usually, you get this kind of mapping on large scale topographic features, such as mountains or canyons. Here's a nice illustration from the 1848 book by Alexander Keith Johnston showing what I mean:
![Figure 1: Height maps to elevation](images/topography_color.png)
On smaller scale (more human-sized) features like hills or dunes, the landscape is dominated instead by ambient lighting and how the sun hits the surface. Here's an example from Great Sand Dunes National Park, in Colorado. Note the (sometimes drastic) change in surface color that changes depending on the time of day or direction of the light:
![Figure 2: How lighting affects terrain colors. Great Sand Dunes National Park, CO.](images/sanddunessmall.png)
Rather than color by elevation, let's try coloring the surface by the direction the slope is facing and the steepness of that slope. This is implemented in the rayshader function, `sphere_shade()`. Here's a video explanation of how it works:
[![Link to video](images/videofront.png)](https://www.tylermw.com/wp-content/uploads/2018/06/fullcombined_web.mp4)
Taking our data and plugging it into the `sphere_shade()` function, here's what we get:
```{r sphere_hobart, fig.width = 4, fig.height = 4}
hobart_mat %>%
sphere_shade() %>%
plot_map()
```
We note right away that it's much more apparent in this hillshade that there is a body of water weaving between the two mountains. To add water to the scene we'll call two functions in rayshader, `detect_water()` and `add_water()`, that allow you to detect and add bodies of water directly to the map using only the elevation data. `detect_water()` works by looking for large, relatively flat, connected regions in a DEM. It's often the case (although not always) that the areas representing water are far more flat than anything else in the matrix, so we take advantage of that fact to extract and color in the water areas.
Occasionally these functions can produce false positives, but there are tuning parameters to help. `detect_water()` provides the parameters `min_area`, `max_height`, and `cutoff` so the user can minimize areas falsely reported as water. `min_area` specifies the minimum number of connected flat points that constitute a body of water. `max_height` sets the maximum height at which a region can be considered water. `cutoff` determines how flat a surface has to be to count as water: `cutoff = 1.0` only accepts areas pointing straight up as water, while `cutoff = 0.99` allows areas that are slightly less than flat to be classified as water. The default is `cutoff = 0.999`.
```{r water_hobart, fig.width = 4, fig.height = 4}
hobart_mat %>%
sphere_shade() %>%
add_water(detect_water(hobart_mat)) %>%
plot_map()
```
The default `min_area` is 1/400th the area of the matrix. There's nothing special about this number, but it's a default that tended to work fairly well for several datasets I tested it on--adjust it to suit your dataset. We will deal with elevation data later in the class where the defaults for this function don't work.
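When the defaults misfire, the tuning parameters described above can be adjusted directly. Here's a sketch (the specific values are illustrative, not recommendations):

```r
# Require flat regions twice the default size, cap water at a height of 100,
# and accept slightly less-than-flat surfaces as water
hobart_mat %>%
  sphere_shade() %>%
  add_water(detect_water(hobart_mat,
                         min_area = length(hobart_mat) / 200,
                         max_height = 100,
                         cutoff = 0.995)) %>%
  plot_map()
```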
You can also start to see why rayshader was built using the pipe operator `%>%`. Here's what the previous code would look like without the pipe:
```{r pipe_example, fig.width = 4, fig.height = 4}
plot_map(add_water(sphere_shade(hobart_mat),detect_water(hobart_mat)))
```
The code is far more readable with the pipe, and the step-by-step process of adding layers to our map is much easier to see.
Both `sphere_shade()` and `add_water()` come with several built-in palettes, and you can create your own with the `create_texture()` function. Here are the built-in textures (top is the highlight color, and all can be specified):
![Figure 3: The various built-in textures in `sphere_shade()`](images/fulltexture.png)
And here's what a sampling of those textures look like (check the documentation for more):
```{r palettes, fig.width = 4, fig.height = 4}
hobart_mat %>%
sphere_shade(texture = "desert") %>%
add_water(detect_water(hobart_mat), color = "desert") %>%
plot_map()
hobart_mat %>%
sphere_shade(texture = "imhof4") %>%
add_water(detect_water(hobart_mat), color = "imhof4") %>%
plot_map()
hobart_mat %>%
sphere_shade(texture = "bw") %>%
add_water(detect_water(hobart_mat), color = "unicorn") %>%
plot_map()
```
Rayshader's name comes from one of the methods it uses to calculate hillshades: raytracing, a method that realistically simulates how light travels across the elevation model. Most traditional methods of hillshading only use the local angle the surface makes with the light, and do not account for areas that actually cast a shadow. This basic type of hillshading is sometimes referred to as "Lambertian", and is implemented in the function `lamb_shade()`. There's one problem with this mapping, though:
```{r lambshade, fig.width = 4, fig.height = 4}
hobart_mat %>%
lamb_shade(zscale = 33) %>%
plot_map()
hobart_mat %>%
lamb_shade(zscale = 33, sunangle = 135) %>%
plot_map()
hobart_mat_inverted = -hobart_mat
hobart_mat %>%
lamb_shade(zscale = 33) %>%
plot_map()
# No difference!
hobart_mat_inverted %>%
lamb_shade(zscale = 33, sunangle = 135) %>%
plot_map()
```
To shade surfaces using raytracing, rayshader draws rays originating from each point towards a light source, specified using the `sunangle` and `sunaltitude` arguments. Here are two gifs showing how rayshader calculates shadows with `ray_shade()`:
![Figure 4: Demonstration of how ray_shade() raytraces images](images/ray.gif)
Since our data is represented by grid points, rayshader uses a method called "bilinear interpolation" to determine whether a ray intersects the ground, even between grid points.
![Figure 5: ray_shade() determines the amount of shadow at a single source by sending out rays and testing for intersections with the heightmap. The amount of shadow is proportional to the number of rays that don't make it to the light.](images/bilinear.gif)
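The idea behind bilinear interpolation can be sketched in a few lines of base R (a simplified illustration of the technique, not rayshader's internal implementation):

```r
# Estimate the height at fractional matrix coordinates (x, y) by blending
# the four surrounding grid points, weighted by proximity
bilinear = function(heightmap, x, y) {
  x0 = floor(x); x1 = x0 + 1; fx = x - x0
  y0 = floor(y); y1 = y0 + 1; fy = y - y0
  heightmap[x0, y0] * (1 - fx) * (1 - fy) +
    heightmap[x1, y0] * fx * (1 - fy) +
    heightmap[x0, y1] * (1 - fx) * fy +
    heightmap[x1, y1] * fx * fy
}
bilinear(matrix(1:4, nrow = 2), 1.5, 1.5)  # center of a 2x2 grid: 2.5
```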
Let's add a layer of shadows to this map, using the `add_shadow()` and `ray_shade()` functions. We'll layer our shadows onto our `lamb_shade()` base layer.
```{r addshadow, fig.width = 4, fig.height = 4}
hobart_mat %>%
lamb_shade(zscale = 33) %>%
add_shadow(ray_shade(hobart_mat, zscale = 33,
sunaltitude = 6, lambert = FALSE), 0.3) %>%
plot_map()
# No longer identical--we can tell canyons from mountains, thanks to raytracing
hobart_mat_inverted %>%
lamb_shade(zscale = 33, sunangle = 135) %>%
add_shadow(ray_shade(hobart_mat_inverted, zscale = 33, sunangle = 135,
sunaltitude = 6, lambert = FALSE), 0.3) %>%
plot_map()
```
We can combine all these together to make our final 2D map:
```{r finalmap, fig.width = 4, fig.height = 4}
hobart_mat %>%
sphere_shade() %>%
add_water(detect_water(hobart_mat), color = "lightblue") %>%
add_shadow(ray_shade(hobart_mat,zscale = 33, sunaltitude = 3,lambert = FALSE), max_darken = 0.5) %>%
add_shadow(lamb_shade(hobart_mat,zscale = 33,sunaltitude = 3), max_darken = 0.5) %>%
plot_map()
```
We can adjust the highlight/sun direction using the `sunangle` argument in both the `sphere_shade()` function and the `ray_shade()` function.
We'll start with the default angle: 315 degrees, or the light from the Northwest. One question you might ask yourself is: Why 315 degrees?
```{r sundirection, fig.width = 4, fig.height = 4}
#Default angle: 315 degrees.
hobart_mat %>%
sphere_shade() %>%
add_water(detect_water(hobart_mat), color = "lightblue") %>%
add_shadow(ray_shade(hobart_mat, zscale = 33, sunaltitude = 5,lambert = FALSE),
max_darken = 0.5) %>%
add_shadow(lamb_shade(hobart_mat,zscale = 33,sunaltitude = 5), max_darken = 0.8) %>%
plot_map()
#225 degrees
hobart_mat %>%
sphere_shade(sunangle = 225) %>%
add_water(detect_water(hobart_mat), color = "lightblue") %>%
add_shadow(ray_shade(hobart_mat,sunangle = 225, zscale = 33, sunaltitude = 5,lambert = FALSE),
max_darken = 0.5) %>%
add_shadow(lamb_shade(hobart_mat,zscale = 33,sunaltitude = 5), max_darken = 0.8) %>%
plot_map()
```
You have complete freedom in choosing how your map is lit. And, as we will show later, you can use the `suncalc` package to bring in the actual position of the sun in the sky for specific times and locations.
![Figure 6: Here we rotate the light around the scene.](images/colorgif.gif)
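As a preview of that `suncalc` workflow, something like the following computes the real sun position and feeds it to rayshader (a sketch: the coordinates and date are placeholders I chose for illustration, and note that `suncalc` returns radians, with azimuth measured from south):

```r
# Get the sun's position over Hobart at winter solstice noon (illustrative)
sunpos = suncalc::getSunlightPosition(
  date = as.POSIXct("2019-06-21 12:00:00", tz = "Australia/Hobart"),
  lat = -42.88, lon = 147.33)

# Convert radians to degrees; add 180 to shift the azimuth from
# measured-from-south to rayshader's measured-from-north convention
sun_angle    = sunpos$azimuth * 180 / pi + 180
sun_altitude = sunpos$altitude * 180 / pi

hobart_mat %>%
  sphere_shade(sunangle = sun_angle) %>%
  add_shadow(ray_shade(hobart_mat, sunangle = sun_angle,
                       sunaltitude = sun_altitude, zscale = 33), 0.5) %>%
  plot_map()
```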
We can also add the effect of ambient occlusion, which is sometimes referred to as the "sky view factor." When light travels through the atmosphere, it scatters. This scattering turns the entire sky into a light source, so when less of the sky is visible (e.g. in a valley) it's darker than when the entire sky is visible (e.g. on a mountain ridge).
Let's calculate the ambient occlusion shadow layer for the Hobart data, and layer it onto the rest of the map.
```{r ambient_occlusion, fig.width = 4, fig.height = 4}
hobart_mat %>%
ambient_shade() %>%
plot_map()
#No ambient occlusion
hobart_mat %>%
sphere_shade() %>%
add_water(detect_water(hobart_mat), color = "lightblue") %>%
add_shadow(ray_shade(hobart_mat, zscale = 33, sunaltitude = 5,lambert = FALSE),
max_darken = 0.5) %>%
add_shadow(lamb_shade(hobart_mat,zscale = 33, sunaltitude = 5), max_darken = 0.7) %>%
plot_map()
#With ambient occlusion
hobart_mat %>%
sphere_shade() %>%
add_water(detect_water(hobart_mat), color = "lightblue") %>%
add_shadow(ray_shade(hobart_mat, zscale = 33, sunaltitude = 5,lambert = FALSE),
max_darken = 0.5) %>%
add_shadow(lamb_shade(hobart_mat,zscale = 33, sunaltitude = 5), max_darken = 0.7) %>%
add_shadow(ambient_shade(hobart_mat), max_darken = 0.1) %>%
plot_map()
```
We can save these plots straight to disk, without displaying them, using the `save_png()` function instead of `plot_map()`.
```{r savepng}
hobart_mat %>%
sphere_shade() %>%
save_png("filename.png")
unlink("filename.png")
```
# 3D Mapping with Rayshader
Now that we know how to perform basic hillshading, we can begin the real fun part: making 3D maps. In rayshader, we do that simply by swapping out `plot_map()` with `plot_3d()`, and adding the heightmap to the function call. We don't want to re-compute the `ambient_shade()` layer every time we re-plot the landscape, so let's save it to a variable.
```{r plot3d}
ambientshadows = ambient_shade(hobart_mat)
hobart_mat %>%
sphere_shade() %>%
add_water(detect_water(hobart_mat), color = "lightblue") %>%
add_shadow(ray_shade(hobart_mat, sunaltitude = 3, zscale = 33, lambert = FALSE), max_darken = 0.5) %>%
add_shadow(lamb_shade(hobart_mat, sunaltitude = 3, zscale = 33), max_darken = 0.7) %>%
add_shadow(ambientshadows, max_darken = 0.1) %>%
plot_3d(hobart_mat, zscale = 10,windowsize = c(1000,1000))
render_snapshot()
rgl::rgl.clear()
```
This opens up an `rgl` window that displays the 3D plot. Drag to manipulate the plot, and control/ctrl-drag to zoom in and out. To close it, we can either close the window itself, or type in `rgl::rgl.close()`.
Just visualizing this on your screen is fun when exploring the data, but we would also like to export our figure to an image file. If you want to take a snapshot of the current view, rayshader provides the `render_snapshot()` function, which is useful for rmarkdown documents like this. If you use this without a filename, it will write and display the plot to the current device. With a filename, it will write the image to a PNG file in the local directory. For variety, let's also change the background/shadow color (arguments `background` and `shadowcolor`), depth of rendered ground/shadow (arguments `soliddepth` and `shadowdepth`), and add a title to the plot.
```{r titleshow}
hobart_mat %>%
sphere_shade() %>%
add_water(detect_water(hobart_mat), color = "lightblue") %>%
add_shadow(ray_shade(hobart_mat, sunaltitude = 3, zscale = 33, lambert = FALSE), max_darken = 0.5) %>%
add_shadow(lamb_shade(hobart_mat, sunaltitude = 3, zscale = 33), max_darken = 0.7) %>%
add_shadow(ambientshadows, max_darken = 0) %>%
plot_3d(hobart_mat, zscale = 10,windowsize = c(1000,1000),
phi = 40, theta = 135, zoom = 0.9,
background = "grey30", shadowcolor = "grey5",
soliddepth = -50, shadowdepth = -100)
render_snapshot(title_text = "River Derwent, Tasmania",
title_font = "Helvetica",
title_size = 50,
title_color = "grey90")
render_snapshot(filename = "derwent.png")
#Delete the file
unlink("derwent.png")
```
If we want to programmatically change the camera, we can do so with the `render_camera()` function. We can adjust four parameters: `theta`, `phi`, `zoom`, and `fov` (field of view). Changing `theta` orbits the camera around the scene, while changing `phi` changes the angle above the horizon at which the camera is located. Here is a graphic demonstrating this relation (and [here's a link to the video from the presentation](https://www.tylermw.com/data/all.mp4) ):
![Figure 7: Demonstration of how ray_shade() raytraces images](images/spherical_coordinates_fixed.png)
`zoom` magnifies the current view (smaller numbers = larger magnification), and `fov` changes the field of view. Higher `fov` values correspond to a more pronounced perspective effect, while `fov = 0` corresponds to an orthographic camera.
Here are different views using the camera:
```{r cameramove}
render_camera(theta = 90, phi = 30, zoom = 0.7, fov = 0)
render_snapshot()
render_camera(theta = 90, phi = 30, zoom = 0.7, fov = 90)
render_snapshot()
render_camera(theta = 120, phi = 20, zoom = 0.3, fov = 90)
render_snapshot()
```
We can also use some post-processing effects to help guide our viewer through our visualization's 3D space. The easiest way to direct the viewer's attention is to label the areas we'd like them to look at, using the `render_label()` function. This function takes the matrix indices `x` and `y` of the area of interest and displays a `text` label at altitude `z`. We'll also use the `title_bar_color` argument to add a semi-transparent light bar behind our title to help it stand out from the background.
```{r labels}
#Windows? Turn off freetype.
if(.Platform$OS.type == "windows") {
freetype = FALSE
} else {
freetype = TRUE
}
render_label(hobart_mat, "River Derwent", textcolor ="white", linecolor = "white", freetype = freetype,
x = 450, y = 260, z = 1400, textsize = 2.5, linewidth = 4, zscale = 10)
render_snapshot(title_text = "render_label() demo, part 1",
title_bar_alpha = 0.8,
title_bar_color = "white")
render_label(hobart_mat, "Jordan River (not that one)", textcolor ="white", linecolor = "white", freetype = freetype,
x = 450, y = 140, z = 1400, textsize = 2.5, linewidth = 4, zscale = 10, dashed = TRUE)
render_snapshot(title_text = "render_label() demo, part 2",
title_bar_alpha = 0.8,
title_bar_color = "white")
```
We can replace all existing text by setting `clear_previous = TRUE`, or clear everything by calling `render_label(clear_previous = TRUE)` with no other arguments.
```{r morelabels}
render_camera(zoom = 0.9, phi = 50, theta = -45,fov = 0)
render_snapshot()
render_label(hobart_mat, "Mount Faulkner", textcolor ="white", linecolor = "white", freetype = freetype,
x = 135, y = 130, z = 2500, textsize = 2, linewidth = 3, zscale = 10, clear_previous = TRUE)
render_snapshot()
render_label(hobart_mat, "Mount Dromedary", textcolor ="white", linecolor = "white", freetype = freetype,
x = 320, y = 390, z = 1000, textsize = 2, linewidth = 3, zscale = 10)
render_snapshot()
render_label(clear_previous = TRUE, freetype = freetype)
render_snapshot()
rgl::rgl.close()
```
This is not the only trick up our sleeve, though. Data visualization is not the first field to tackle the problem of "How do I direct a viewer's attention in 3D space, projected on a 2D screen?" Cinematographers encounter the same issue when they're filming (except when using small aperture sizes or lenses with short focal lengths). Below is a shot of Daniel Craig in the movie "Defiance":
![Figure 8: Daniel Craig in Defiance](images/good-selective-focus-in-defiance.jpg)
Even though the 3D space this shot is representing is fairly deep (with a distinct foreground, middle, and background), we know our attention is supposed to be focused on the man in the center. The use of a shallow depth of field tells our viewers what isn't important, and where their attention should be.
We can apply this same technique in rayshader using the `render_depth()` function. This applies a depth of field effect, as if our scene was being photographed by a real camera. Let's focus in on the river in the background. The `focus` parameter should be between 0 and 1, where 1 is the background and 0 is the foreground. The range of depths will depend on our camera position--calling the function with `preview_focus = TRUE` will print the range in the current view, and you can use the red line to see exactly where the focal plane will be positioned.
```{r depthoffield}
hobart_mat %>%
sphere_shade(sunangle = 60) %>%
add_water(detect_water(hobart_mat), color = "lightblue") %>%
add_shadow(ray_shade(hobart_mat, sunangle = 60, sunaltitude = 3, zscale = 33, lambert = FALSE), max_darken = 0.5) %>%
add_shadow(lamb_shade(hobart_mat, sunangle = 60, sunaltitude = 3, zscale = 33), max_darken = 0.7) %>%
add_shadow(ambientshadows, max_darken = 0.1) %>%
plot_3d(hobart_mat, zscale = 10,windowsize = c(1000,1000),
background = "#edfffc", shadowcolor = "#273633")
render_camera(theta = 120, phi = 20, zoom = 0.3, fov = 90)
render_depth(focus = 0.81, preview_focus = TRUE)
render_depth(focus = 0.9, preview_focus = TRUE)
render_depth(focus = 0.81)
```
This effect is rather subtle, so let's increase the focal length of the camera using the `focallength` argument. This will make for a shallower depth of field (increasing the blurring effect in areas far from the focal plane). We can also add `title_*` arguments like in `render_snapshot()`, and we'll add a lens vignetting effect (which darkens the edges) by setting `vignette = TRUE`. This option is also available in `render_snapshot()` and `render_movie()`.
```{r moredepthoffield}
render_depth(focus = 0.81, focallength = 200, title_bar_color = "black", vignette = TRUE,
title_text = "The River Derwent, Tasmania", title_color = "white", title_size = 50)
rgl::rgl.close()
```
Rayshader can be used for more than just topographic data visualization--we can also easily visualize the intersection between land and water. Here, we'll use the built-in `montereybay` dataset, which includes both bathymetric and topographic data for Monterey Bay, California. We include a transparent water layer by setting `water = TRUE` in `plot_3d()`, and add lines showing the edges of the water by setting the `waterlinecolor` argument.
```{r water3d}
montereybay %>%
sphere_shade() %>%
plot_3d(montereybay, water = TRUE, waterlinecolor = "white",
theta = -45, zoom = 0.9, windowsize = c(1000,1000),zscale = 50)
render_snapshot(title_text = "Monterey Bay, California",
title_color = "white", title_bar_color = "black")
```
By default, `plot_3d()` sets the water level at 0 (sea level), but we can change this, either by adjusting `waterdepth` in our call to `plot_3d()`, or by calling the `render_water()` function after the fact. Here, we make the water slightly less transparent with the `wateralpha` argument (`wateralpha = 1` is opaque, `wateralpha = 0` is transparent).
```{r changewater}
render_water(montereybay, zscale = 50, waterdepth = -100,
waterlinecolor = "white", wateralpha = 0.7)
render_snapshot(title_text = "Monterey Bay, California (water level: -100 meters)",
title_color = "white", title_bar_color = "black")
render_water(montereybay, zscale = 50, waterdepth = 30,
waterlinecolor = "white", wateralpha = 0.7)
render_snapshot(title_text = "Monterey Bay, California (water level: 30 meters)",
title_color = "white", title_bar_color = "black")
rgl::rgl.close()
```
Although our underlying matrix data is rectangular, we aren't limited to rectangular 3D plots. Any entry in the matrix that has a value of `NA` will be sliced out of the visualization. Here, we slice out just the undersea data from the `montereybay` dataset. We still use the full matrix to calculate the hillshade and shadows, but pass the sliced elevation matrix to `plot_3d()`:
```{r removeland3d}
mont_bathy = montereybay
mont_bathy[mont_bathy >= 0] = NA
montereybay %>%
sphere_shade() %>%
add_shadow(ray_shade(mont_bathy,zscale = 50, sunaltitude = 15, lambert = FALSE),0.5) %>%
plot_3d(mont_bathy, water = TRUE, waterlinecolor = "white",
theta = -45, zoom = 0.9, windowsize = c(1000,1000))
render_snapshot(title_text = "Monterey Bay Canyon",
title_color = "white",
title_bar_color = "black")
rgl::rgl.clear()
```
To slice out just the land, we swap the comparison:
```{r removewater3d}
mont_topo = montereybay
mont_topo[mont_topo < 0] = NA
montereybay %>%
sphere_shade() %>%
add_shadow(ray_shade(mont_topo, zscale = 50, sunaltitude = 15, lambert = FALSE),0.5) %>%
plot_3d(mont_topo, shadowdepth = -50,
theta = 135, zoom = 0.9, windowsize = c(1000,1000))
render_snapshot(title_text = "Monterey Bay (sans water)",
title_color = "white",
title_bar_color = "black")
rgl::rgl.clear()
```
Alternatively, if we just want a more interesting base shape and don't mind losing some data, we can have rayshader carve our dataset into either a hexagon or a circle by specifying the `baseshape` argument in `plot_3d()` (I've also thrown in some new background colors, shadow colors, and the vignette effect for variety):
```{r baseshape}
montereybay %>%
sphere_shade() %>%
add_shadow(ray_shade(montereybay,zscale = 50,sunaltitude = 15,lambert = FALSE),0.5) %>%
plot_3d(montereybay, water = TRUE, waterlinecolor = "white", baseshape = "hex",
theta = -45, zoom = 0.7, windowsize = c(1000,1000),
shadowcolor = "#4e3b54", background = "#f7e8fc")
render_snapshot(title_text = "Monterey Bay Canyon, Hexagon", vignette = TRUE,
title_color = "white", title_bar_color = "black", clear = TRUE)
montereybay %>%
sphere_shade() %>%
add_shadow(ray_shade(montereybay,zscale = 50,sunaltitude = 15,lambert = FALSE),0.5) %>%
plot_3d(montereybay, water = TRUE, waterlinecolor = "white", baseshape = "circle",
theta = -45, zoom = 0.7, windowsize = c(1000,1000),
shadowcolor = "#4f3f3a", background = "#ffeae3")
render_snapshot(title_text = "Monterey Bay Canyon, Circle",
title_color = "white", title_bar_color = "black", clear = TRUE)
```
Taking snapshots of our map is useful, but a static visualization isn't the ideal way to represent our 3D data. For the viewer, the depth variable has to be inferred from context, which can be ambiguous and easily misinterpreted. A far better form for our visualization is a movie/animation--we can guide our viewers on a 3D tour of our data, which resolves many visual ambiguities. Rayshader makes this easy with the `render_movie()` function, which creates a movie (mp4 file) that can easily be shared on social media. It accepts titles and all of the same arguments as `render_snapshot()`.
By default, `render_movie()` has two animations built in: a basic orbit (`type = "orbit"`) and an oscillating animation (`type = "oscillate"`). The `frames` argument sets the number of frames, while `fps` sets the frames per second; the total length of the video is `frames/fps`. Generally, you should stick with either 30 or 60 frames per second if you want to share your videos on social media. You also shouldn't make the camera move or spin too quickly--fast camera movements can cause nausea and aren't very interpretable.
```{r rendermovie}
montereybay %>%
sphere_shade() %>%
plot_3d(montereybay, water = TRUE, waterlinecolor = "white",
theta = -45, zoom = 0.9, windowsize = c(600,600))
#Orbit will start with current setting of phi and theta
render_movie(filename = "montbay.mp4", title_text = 'render_movie(type = "orbit")',
phi = 30 , theta = -45)
render_movie(filename = "montbayosc.mp4", phi = 30 , theta = -90, type = "oscillate",
title_text = 'render_movie(type = "oscillate")', title_color = "black")
unlink("montbay.mp4")
unlink("montbayosc.mp4")
```
You can also set `type = "custom"` and pass in a vector of camera values to the `phi`, `theta`, `zoom`, and `fov` arguments, and each value will be treated as a single frame in our animation. Here I've also provided an "easing" function that smoothly transitions between two values, so we can zoom in and out without jarring changes.
```{r custommovie, fig.width = 3, fig.height = 3}
ease_function = function(beginning, end, steepness = 1, length.out = 180) {
single = (end) + (beginning - end) * 1/(1 + exp(seq(-10, 10, length.out = length.out)/(1/steepness)))
single
}
zoom_values = c(ease_function(1,0.3), ease_function(0.3,1))
#This gives us a zoom that looks like this:
ggplot(data.frame(x = 1:360,y = zoom_values),) +
geom_line(aes(x = x,y = y),color = "red",size = 2) +
ggtitle("Zoom value by frame")
render_movie(filename = "montbaycustom.mp4", type = "custom",
phi = 30 + 15 * sin(1:360 * pi /180),
theta = -45 - 1:360,
zoom = zoom_values)
rgl::rgl.close()
unlink("montbaycustom.mp4")
```
If you want to do something more advanced, like vary the water depth between each frame or call `render_depth()` to create a depth of field effect, you'll need to generate the movie manually in a `for` loop. Luckily, this is fairly simple with the `av` package: we render each frame to a PNG file, then combine them into a movie. Let's vary the water level from the most recent ice age minimum of -130 meters to the current sea level.
```{r advancedmovie}
# montereybay %>%
# sphere_shade(texture = "desert") %>%
# plot_3d(montereybay, windowsize = c(600,600),
# shadowcolor = "#222244", background = "lightblue")
#
# render_camera(theta = -90, fov = 70,phi = 30,zoom = 0.8)
#
# for(i in 1:60) {
# render_water(montereybay, zscale = 50, waterdepth = -60 - 60 * cos(i*pi*6/180),
# watercolor = "#3333bb",waterlinecolor = "white", waterlinealpha = 0.5)
# render_snapshot(filename = glue::glue("iceage{i}.png"), title_size = 30, instant_capture = TRUE,
# title_text = glue::glue("Sea level: {round(-60 - 60 *cos(i*pi*6/180),1)} meters"))
# }
#
# av::av_encode_video(glue::glue("iceage{1:60}.png"), output = "custom_movie.mp4",
# framerate = 30)
#
# rgl::rgl.close()
#
# unlink(glue::glue("iceage{1:60}.png"))
# unlink("custom_movie.mp4")
```
Finally, I'm going to show you a really cool feature I've recently implemented: `render_highquality()`. This calls a powerful rendering engine built into rayshader to create truly stunning 3D visualizations. You control the view entirely through the rgl window--`render_highquality()` recreates the scene using the `rayrender` package and returns a beautiful, pathtraced rendering with realistic shadows and lighting. You can adjust the lights' angle, direction, color, and intensity, as well as add additional lights and objects to the scene.
```{r renderhighqual, fig.width = 4, fig.height = 4}
hobart_mat %>%
sphere_shade(texture = "desert") %>%
add_water(detect_water(hobart_mat), color = "desert") %>%
plot_3d(hobart_mat, zscale = 10)
render_highquality()
```
Notice we no longer need any of the shadow-generating functions--the shadows are calculated automatically as part of the rendering process. You can adjust the light direction, or even add your own lights using `rayrender`--it's an extremely powerful tool for creating visualizations. You can decrease the noise by setting the `samples` argument in `render_highquality()` to a higher number (the default is 100).
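For example, you can trade render time for a cleaner image by re-rendering the scene currently open in the rgl window with more samples (a quick sketch--400 is just an illustrative value, not a recommendation):
```{r renderhighqualsamples, eval=FALSE}
# Re-render the currently open rgl scene with more samples per pixel
# to reduce pathtracing noise, at the cost of a longer render time
render_highquality(samples = 400)
```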
Adding a custom light:
```{r renderhighquallight, fig.width = 4, fig.height = 4}
# `sphere()` and `diffuse()` are rayrender functions
render_highquality(light = FALSE,
                   scene_elements = sphere(y = 150, radius = 30,
                                           material = diffuse(lightintensity = 40,
                                                              implicit_sample = TRUE)))
rgl::rgl.close()
```
Visualizing a 3D model with water:
```{r renderhighqualwater, fig.width = 4, fig.height = 4}
montereybay %>%
sphere_shade() %>%
plot_3d(montereybay, zscale = 50, water = TRUE)
render_camera(theta = -45, zoom = 0.7, phi = 30,fov = 70)
render_highquality(lightdirection = 100, lightaltitude = 45, lightintensity = 800,
clamp_value = 10, title_text = "Monterey Bay, CA",
title_color = "white", title_bar_color = "black")
rgl::rgl.close()
```
# Elevation Data Sources
Here are some sources for downloading elevation/bathymetry data. I have included several how-to PDF guides as well.
## Philadelphia Guide
https://github.com/tylermorganwall/MusaMasterclass/raw/master/data_source_guides/philly.pdf
Alternate: https://dl.dropboxusercontent.com/s/5ls2gfjssosr2pa/philly.pdf?dl=0
Source:
https://maps.psiee.psu.edu/preview/map.ashx?layer=2021
## Florida Guide
https://github.com/tylermorganwall/MusaMasterclass/raw/master/data_source_guides/florida.pdf
Alternate: https://dl.dropboxusercontent.com/s/adf1dsmytnw9f8c/florida.pdf
Source:
http://dpanther2.ad.fiu.edu/Lidar/lidarNew.php
## Texas Guide
https://github.com/tylermorganwall/MusaMasterclass/raw/master/data_source_guides/texas.pdf
Alternate: https://dl.dropboxusercontent.com/s/89xs8yaet8s0x35/texas.pdf
Source:
https://tnris.org/stratmap/elevation-lidar/
## SRTM/OpenTopography Global Elevation Data Guide
https://github.com/tylermorganwall/MusaMasterclass/raw/master/data_source_guides/srtm_global.pdf
Alternate: https://dl.dropboxusercontent.com/s/zu46o6ilzejy6gx/srtm_global.pdf?dl=0
Source:
http://opentopo.sdsc.edu/raster?opentopoID=OTSRTM.082015.4326.1
## GEBCO Global Bathymetry Data Guide
https://github.com/tylermorganwall/MusaMasterclass/raw/master/data_source_guides/gebco_global_bathymetry.pdf
Alternate: https://dl.dropboxusercontent.com/s/i0regel891xajz3/gebco_global_bathymetry.pdf
Source:
https://download.gebco.net
# Mapping the Effects of Rising Sea Levels
How are coastal communities impacted by rising sea levels? That's one of the most pressing problems presented by our changing climate. In my opinion, actually visualizing the water level directly is far more impactful than just discussing the numbers. Here, we're going to combine lidar data from Miami Beach with predictions about sea level rise to show how that community will be affected.
We're not just going to take into account sea level rise--climate change results in warmer oceans, which in turn results in more powerful hurricanes. While gradual sea level rise might be countered by levees and seawalls, these hurricanes bring storm surges that can completely overwhelm and inundate coastal cities. And since Miami is directly in the predicted path of many hurricanes, it's important we take that into account.
Here's a link to a Google Maps view of the region we're going to analyze:
https://www.google.com/maps/place/Miami+Beach,+FL/@25.7438709,-80.1337059,1428a,35y,345.64h,57.55t/data=!3m1!1e3!4m5!3m4!1s0x88d9a6172bfeddb9:0x37be1741259463eb!8m2!3d25.790654!4d-80.1300455
We're going to model the most extreme scenario: 10.7 ft sea level rise by the year 2100, with a 9 ft storm surge. Our data is in the Florida East State Plane coordinate system, so the elevation is in feet. I pulled the following two datasets from this website (follow the [guide for Florida](https://github.com/tylermorganwall/MusaMasterclass/raw/master/data_source_guides/florida.pdf) to download your own data):
http://dpanther2.ad.fiu.edu/Lidar/lidarNew.php
Now, let's load some lidar data into R. We'll do this using the `whitebox` package, which has several functions for manipulating and transforming lidar data, among many other useful features. Many lidar datasets are far too large to easily visualize, and are often noisy--we are going to use `whitebox` to clean the data and turn it into a simple elevation model that includes buildings. This can take a minute, so I've commented out the code below--you can run it if you want.
```{r miamiprocess}
## Download this lidar data
# download.file("https://dl.dropboxusercontent.com/s/hkdxt1zbsjp68jl/LID2007_118755_e.zip",destfile = "LID2007_118755_e.zip")
# download.file("https://dl.dropboxusercontent.com/s/omvb63urfby6ddb/LID2007_118754_e.zip",destfile = "LID2007_118754_e.zip")
##Alternate links
# download.file("https://www.tylermw.com/data/LID2007_118755_e.zip",destfile = "LID2007_118755_e.zip")
# download.file("https://www.tylermw.com/data/LID2007_118754_e.zip",destfile = "LID2007_118754_e.zip")
#
# unzip("LID2007_118755_e.zip")
# unzip("LID2007_118754_e.zip")
#
# whitebox::wbt_lidar_tin_gridding(here::here("LID2007_118755_e.las"),
# output = here::here("miamibeach.tif"),
# resolution = 1, verbose_mode = TRUE,
# exclude_cls = '3,4,5,7,13,14,15,16,18')
#
# whitebox::wbt_lidar_tin_gridding(here::here("LID2007_118754_e.las"),
# output = here::here("miamibeach2.tif"),
# resolution = 1, verbose_mode = TRUE,
# exclude_cls = '3,4,5,7,13,14,15,16,18')
```
While this runs, let's talk about general limitations you'll encounter when dealing with lidar data. Notice that `whitebox` isn't loading any data into your R environment, and for good reason: R is memory hungry, and processing a full lidar dataset would be difficult on most standard machines. The `whitebox` package operates on files outside of R, and outputs a processed file when it's done. Once the data is transformed from a collection of lidar point measurements to a regular grid of elevation values, we can load it into R and get to work.
![Figure 9: Lidar data arrives as a collection of points--it has to be transformed to a regular grid.](images/lidar.png)
We're using `whitebox` to remove several categories of points in the data--here's a table of what the numbers passed in the `exclude_cls` argument refer to:
| CLS | Classification |
|-----|--------------------------------------|
| 0 | Never classified |
| 1 | Unassigned |
| 2 | Ground |
| 3 | Low Vegetation |
| 4 | Medium Vegetation |
| 5 | High Vegetation |
| 6 | Building |
| 7 | Low Point |
| 9 | Water |
| 10 | Rail |
| 11 | Road Surface |
| 13 | Wire - Guard (Shield) |
| 14 | Wire - Conductor (Phase) |
| 15 | Transmission Tower |
| 16 | Wire-Structure Connector (Insulator) |
| 17 | Bridge Deck |
| 18 | High Noise |
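As a hypothetical variation on the calls above, if you wanted a bare-earth terrain model with no buildings, you could exclude class 6 as well (the file path and output name here are illustrative, reusing the same Miami tile):
```{r bareearthexample, eval=FALSE}
# Hypothetical variation: also exclude buildings (class 6) to produce a
# bare-earth elevation model instead of one that includes structures
whitebox::wbt_lidar_tin_gridding(here::here("LID2007_118755_e.las"),
                     output = here::here("miamibeach_bare.tif"),
                     resolution = 1, verbose_mode = TRUE,
                     exclude_cls = '3,4,5,6,7,13,14,15,16,18')
```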
Let's download the pre-processed tifs, if you aren't generating them from the lidar data:
```{r miamidownload}
download.file("https://dl.dropboxusercontent.com/s/rwajxbdwtkcw50c/miamibeach.tif", destfile = "miamibeach.tif")
download.file("https://dl.dropboxusercontent.com/s/39abkh87h05n7rm/miamibeach2.tif", destfile = "miamibeach2.tif")
## Alternate Links
# download.file("https://www.tylermw.com/data/miamibeach.tif", destfile = "miamibeach.tif")
# download.file("https://www.tylermw.com/data/miamibeach2.tif", destfile = "miamibeach2.tif")
```
We'll load them in with the `raster` package, and then merge the two together. Because they came from the same source, they have the same coordinate system, so the merge happens seamlessly. We then convert the raster to a matrix.
```{r miamiload, fig.width=6,fig.height=3}
miami1 = raster::raster("miamibeach.tif")
miami2 = raster::raster("miamibeach2.tif")
miami_combined = raster::merge(miami1, miami2)
miami_beach = raster_to_matrix(miami_combined)
dim(miami_beach)
```
This is very large! Too large to easily visualize. Let's downsample the data by a factor of 4.
```{r miamimapbad, fig.width=6,fig.height=3}
# 1/4th the size, so 0.25 for the second argument
miami_beach_small = reduce_matrix_size(miami_beach, 0.25)
## Backup:
#download.file("https://www.tylermw.com/data/miami_beach_small.Rds",destfile = "miami_beach_small.Rds")
#miami_beach_small = readRDS("miami_beach_small.Rds")
dim(miami_beach_small)
miami_beach_small %>%
sphere_shade(texture = "desert") %>%
add_water(detect_water(miami_beach_small,zscale = 4)) %>%
add_shadow(ray_shade(miami_beach_small, zscale = 4, multicore = TRUE,
sunaltitude = 10, sunangle = -110),0.3) %>%
plot_map()
```
`detect_water()` didn't work well here. This is because the lidar returns include the water surface itself--you can see ripples and waves where the water should be. Since the surface isn't perfectly flat, we need to adjust the parameters to get it to work.
```{r miamimapgood, fig.width=6,fig.height=3}
miami_beach_small %>%
sphere_shade(texture = "desert") %>%
add_water(detect_water(miami_beach_small, cutoff=0.2, zscale=4,
min_area = length(miami_beach_small)/150,
max_height = 3)) %>%
add_shadow(ray_shade(miami_beach_small, zscale = 4, multicore = TRUE,
sunaltitude = 10, sunangle = -110),0.3) %>%
plot_map()
```
We should slice this map down to only the region of interest--Miami Beach proper. We can do this with the `crop` function in the `raster` package. We first create an `extent` object, which specifies the boundaries of our map. We then convert the boundaries from lat/long to the coordinate system of our lidar data, which is the Florida East State Plane. I have provided a function here, `lat_long_to_other`, that performs this transformation. You can use it with other coordinate systems as well--you will just need to provide the "EPSG" code of the desired coordinate system.
To figure out the desired latitude and longitude boundaries for Miami Beach, I went to Google Maps and double clicked on the points I wanted to be the bottom left and top right of our map. Google Maps displays the longitude and latitude at the bottom, which you can then copy and paste.
```{r miamicropped}
lat_long_to_other = function(lat,long, epsg_code) {
data = data.frame(long=long, lat=lat)
coordinates(data) <- ~ long+lat
#Input--lat/long
proj4string(data) <- CRS("+init=epsg:4326")
#Convert to coordinate system specified by EPSG code
xy = data.frame(spTransform(data, CRS(paste0("+init=epsg:", epsg_code))))
colnames(xy) = c("x","y")
return(unlist(xy))
}
# Florida East State Plane EPSG code is 2236
# I grabbed these latitudes and longitudes from double clicking on google maps
bottomleft = lat_long_to_other(25.763675, -80.142499, 2236)
topright = lat_long_to_other(25.775562, -80.127569, 2236)
# Create an extent object:
e = extent(c(bottomleft[1], topright[1], bottomleft[2], topright[2]))
# Use that extent object to crop the original grid to our desired area
crop(miami_combined, e) %>%
raster_to_matrix() %>%
reduce_matrix_size(0.25) ->
miami_cropped
## Backup
#download.file("https://www.tylermw.com/data/miami_cropped.Rds",destfile = "miami_cropped.Rds")
#miami_cropped = readRDS("miami_cropped.Rds")
## R 3.5 and below:
#download.file("https://www.tylermw.com/data/miami_cropped_oldr.Rds",destfile = "miami_cropped_oldr.Rds")
#miami_cropped = readRDS("miami_cropped_oldr.Rds")
miami_cropped %>%
sphere_shade(texture = "desert") %>%
add_water(detect_water(miami_cropped, cutoff = 0.3,
min_area = length(miami_cropped)/20,
max_height = 2)) %>%
add_shadow(ray_shade(miami_cropped, zscale = 4, multicore = TRUE,
sunaltitude = 30, sunangle = -110),0.3) %>%
plot_map()
```
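As an aside, the `sp`-based workflow inside `lat_long_to_other` is gradually being superseded by the `sf` package. The same conversion can be sketched with `sf::st_transform()` (this assumes `sf` is installed--it's an alternative, not what we use here):
```{r sfalternative, eval=FALSE}
library(sf)
# Build points in long/lat order (EPSG 4326), then project them to the
# Florida East State Plane coordinate system (EPSG 2236)
pts = st_sfc(st_point(c(-80.142499, 25.763675)),
             st_point(c(-80.127569, 25.775562)), crs = 4326)
st_coordinates(st_transform(pts, 2236))
```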
Cropped and ready. Now that we have our data prepped and reduced, let's visualize it in 3D. The coordinate system here is defined in feet, so that's the unit we'll use for the `waterdepth` values.
```{r miamibeachwater}
miami_cropped %>%
sphere_shade(texture = "desert") %>%
add_shadow(ray_shade(miami_cropped, zscale = 4, multicore = TRUE),0.3) %>%
plot_3d(miami_cropped, zscale = 4, water = TRUE, waterdepth = 0,
zoom=0.85, windowsize = 1000,
background = "grey50", shadowcolor = "grey20")
render_camera(phi = 45,fov = 70,zoom = 0.45,theta = 25)
render_snapshot(title_text = "Modern Day Miami Beach, Sea Level: 0 feet",
title_bar_color = "black",
title_color = "white", vignette = 0.2)
render_camera(phi = 45,fov = 70,zoom = 0.44,theta = 20)
render_water(miami_cropped,zscale = 4,waterdepth = 3)
render_snapshot(title_text = "Miami Beach, Sea Level: 3 feet",
title_bar_color = "black",
title_color = "white", vignette = 0.2)
render_camera(phi = 45,fov = 70,zoom = 0.42,theta = 10)
render_water(miami_cropped,zscale = 4,waterdepth = 7)
render_snapshot(title_text = "Miami Beach, Sea Level: 7 feet (Max pred. 2100 sea level rise)",
title_bar_color = "black",
title_color = "white", vignette = 0.2)
render_camera(phi = 45,fov = 70,zoom = 0.41,theta = 5)
render_water(miami_cropped,zscale = 4,waterdepth = 16)
render_snapshot(title_text = "Miami Beach, Sea Level: 16 feet (Max sea level rise + 9 ft storm surge)",
title_size = 30,
title_bar_color = "black",
title_color = "white", vignette = 0.2)
rgl::rgl.close()
```
Even with only a few feet of sea level rise, what we now consider to be extreme flooding events will become far more common, and many areas by the coast will become unlivable. We focused here on a relatively wealthy part of the nation--visualizing sea level rise in well-known areas brings outsized attention to the problem--but this type of analysis is important to perform for communities all along the coast. I encourage you to grab elevation/lidar data for smaller coastal communities and see how rising sea levels could affect them.
# Mapping the Shadows of Philadelphia
Now let's use rayshader to perform an analysis using what it does best: calculating shadows! Back in 2015, the New York Times had a fantastic piece by Quoctrung Bui and Jeremy White on mapping the shadows of New York City.
https://www.nytimes.com/interactive/2016/12/21/upshot/Mapping-the-Shadows-of-New-York-City.html
They took three days in the year and mapped out how much of the day each spot in the city spent in shadow: the winter solstice (December 21st), the summer solstice (June 21st), and the autumnal equinox (September 22nd). They worked with researchers at the Tandon School of Engineering at NYU to develop a raytracer to perform these calculations. We don't have access to that tool, but we have lidar data and rayshader--that's all we'll need. This type of analysis is particularly important for assessing the impact of so-called "pencil towers" that have gone up in Manhattan--extremely tall, skinny skyscrapers lining Central Park and catering to the super wealthy. How do tall buildings affect sunlight, a shared common good and scarce resource in densely packed urban areas?
We are going to perform and visualize a similar analysis for a hypothetical skyscraper in West Philadelphia, using lidar data from Pennsylvania.
Let's start by loading some lidar data into R. Here's a link to the PDF file with instructions on how to load data for Philly.
https://github.com/tylermorganwall/MusaMasterclass/raw/master/data_source_guides/philly.pdf
Let's walk through downloading a specific dataset:
https://maps.psiee.psu.edu/preview/map.ashx?layer=2021
Here, we're going to load our lidar dataset of Penn Park, right by the Schuylkill. We're going to again use the whitebox package to prep the data.
```{r phillyprocess}
#291.3 MB
# download.file("https://dl.dropboxusercontent.com/s/2jge03ptvsley62/26849E233974N.zip", destfile = "26849E233974N.zip")
## Alternate Link
# download.file("https://www.tylermw.com/data/26849E233974N.zip", destfile = "26849E233974N.zip")
unzip("26849E233974N.zip")
whitebox::wbt_lidar_tin_gridding(here::here("26849E233974N.las"),
output = here::here("phillydem.tif"), minz = 0,
resolution = 1, exclude_cls = '3,4,5,7,13,14,15,16,18')
```
Now, let's load the data into R.
```{r loadphilly}
# backups in case whitebox doesn't work for you
download.file("https://dl.dropboxusercontent.com/s/3auywgq93lokurf/phillydem.tif", destfile = "phillydem.tif")
## Alternate Link
# download.file("https://www.tylermw.com/data/phillydem.tif", destfile = "phillydem.tif")
phillyraster = raster::raster("phillydem.tif")
building_mat = raster_to_matrix(phillyraster)
```
A 2640x2640 matrix is easily processed in R, but it's not ideal for developing a visualization. Large matrices (greater than about 2000x2000, though this depends on your machine) take a while to display in 3D. It's faster to prototype your visualizations with a smaller dataset--once you're happy with the end result, you can re-run the script with the original high-resolution data. We're time limited and our visualization doesn't require a high-resolution dataset, so we won't use one. Let's reduce it by a factor of 2.
```{r reducephilly}
## Alternate Link
# download.file("https://www.tylermw.com/data/building_mat_small.Rds", destfile = "building_mat_small.Rds")
# building_mat_small = readRDS("building_mat_small.Rds")
building_mat_small = reduce_matrix_size(building_mat, 0.5)
dim(building_mat_small)
```
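The NYT-style "time in shadow" metric boils down to averaging `ray_shade()` maps over the sun's positions through a day. Here's a minimal sketch of that idea--note the sun altitudes and angles below are made-up illustrative values, not computed for any real date:
```{r shadowsketch, eval=FALSE}
# Average shadow maps across several (illustrative) sun positions;
# ray_shade() returns 1 for fully lit cells and 0 for fully shadowed ones
sun_altitudes = c(10, 25, 40, 50, 40, 25, 10)
sun_angles = c(90, 115, 145, 180, 215, 245, 270)
shadow_sum = matrix(0, nrow = nrow(building_mat_small),
                    ncol = ncol(building_mat_small))
for(i in seq_along(sun_altitudes)) {
  shadow_sum = shadow_sum +
    ray_shade(building_mat_small, sunaltitude = sun_altitudes[i],
              sunangle = sun_angles[i], zscale = 2, multicore = TRUE)
}
# Fraction of these sun positions each cell spends in shadow
plot_map(1 - shadow_sum/length(sun_altitudes))
```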