# Remote Sensing Systems {#chapter-template}
You probably know that you are using your very own organic remote sensing system to read this sentence. Our eyes take in information from the world around us by detecting changes in light and relaying that information through the optic nerve to our brains, where we make sense of what we are seeing. As you learned in Chapter 11, this is what constitutes remote sensing - gathering information ("sensing") without coming into direct contact with the object or phenomenon being observed ("remote"). Whereas our eyes are limited to the visible light portion of the electromagnetic spectrum and by the location of our bodies, remote sensing systems use powerful sensors and flight-equipped platforms to paint a broader and deeper picture of the world around us. The picture in figure \@ref(fig:12-GOES-1-earth) is an example of the beautiful imagery we can capture from space, taken from the GOES-1 satellite.
<br>
```{r 12-GOES-1-earth, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("North and South America as seen from the NASA GOES-1 satellite. Captured from KeepTrack.space via NASA. [Copyright (C) 2007 Free Software Foundation, Inc.](https://fsf.org/)")
knitr::include_graphics(here::here("images", "12-GOES_1_earth.png"))
```
<br>
Remote sensing systems range in size and complexity from a handheld camera to the Hubble Space Telescope and capture images of areas ranging from a few metres to several kilometres across. Though devices such as microscopes, x-ray machines, and handheld radios are technically remote sensing systems, the field of remote sensing typically refers to observing Earth at relatively small spatial scales (1:100 to 1:250,000).
<br>
:::: {.box-content .call-out-content}
::: {.box-title .call-out-top}
#### NOTE {-}
:::
<p id="box-text">
Remember, in spatial scale, “small” means a big picture. If you want a refresher on how to read and understand map scales, check out section 2.5 in Chapter 2.
</p>
::::
<br>
The range of uses for remote sensing platforms is dazzling, allowing us to monitor severe weather events, ocean currents, land cover change, natural disturbances, forest health, surface temperature, cloud cover, urban development, and much more with high precision and accuracy. In this chapter, we will break down the how and where of remote sensing systems and discover a few different systems used for Earth observation today.
<br>
:::: {.box-content .your-turn-content}
::: {.box-title .your-turn-top}
#### Your turn! {-}
:::
<p id="box-text">
What other “remote sensing systems” can you think of that you use day-to-day?
</p>
::::
<br>
:::: {.box-content .learning-objectives-content}
::: {.box-title .learning-objectives-top}
#### Learning Objectives {-}
:::
1. Break down remote sensing technology into its basic components
2. Understand how different settings and parameters impact remote sensing system outcomes
3. Review the key remote sensing systems used in Canada and around the world for environmental management
::::
<br>
### Key Terms {-}
Absorption, aerial, curvature, focus, geostationary, hyperspectral, LiDAR, multispectral, nadir, oblique, orbit, panchromatic, radiometric, reflection, refraction, resolution, spectral, sun-synchronous, thermal
## Optical System Basics
Remote sensing systems contain a number of common components and operate using similar principles despite their differences in appearance. In the next section, we will discover the technical specifications that allow remote sensing systems to "see" the world around us.
### Lenses
Picture the view from a window onto a busy street on a rainy day: you have cars driving by with headlights, traffic lights reflecting off a wet road, raindrops pouring down the windowpane and distorting the view, and hundreds of people and objects on the street scattering light beams in every possible direction from a huge range of distances. In order for us to take in any of this, these light beams need to reach the retina, the photosensitive surface at the very back of the eye. How do our relatively tiny eyeballs take in all that disparate light and produce crystal clear images for our brains? By using one of the most basic components of any optical system: a lens. A lens is a specially shaped piece of transparent material that changes the shape and direction of light waves passing through it in a desired way.
<br>
:::: {.box-content .call-out-content}
::: {.box-title .call-out-top}
#### NOTE {-}
:::
<p id="box-text">
The ability of transparent media to change the direction of light beams passing through them is called **refraction**. This is why objects in moving water look wobbly and misshapen. The arrangement of molecules within a medium alters both the direction and speed of the photons; the measure of this effect is called the refractive index.
</p>
::::
<br>
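If you would like to put numbers to this bending, Snell's law relates the refractive indices of two media to the angles a light beam makes with the surface normal on either side of the boundary: $n_1\sin\theta_1 = n_2\sin\theta_2$. The short R sketch below uses typical textbook values for the refractive indices of air and water, assumed here purely for illustration, to estimate how much a beam bends as it enters water.
```{r 12-snell-sketch, echo = TRUE}
# Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
# Typical (assumed) refractive indices: air ~ 1.00, water ~ 1.33
n_air   <- 1.00
n_water <- 1.33

# Angle of incidence in air, measured from the surface normal (degrees)
theta_air_deg <- 45
theta_air_rad <- theta_air_deg * pi / 180

# Solve for the refracted angle in water
theta_water_rad <- asin(n_air * sin(theta_air_rad) / n_water)
round(theta_water_rad * 180 / pi, 1)  # roughly 32 degrees: the beam bends toward the normal
```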
The lens at the front of your eyeball refracts light beams arriving from varying distances precisely onto your retina. The optical systems on remote sensing platforms are the same: they use a specially designed lens to focus light beams from the desired distance onto their own recording medium. Below in figure \@ref(fig:12-focus-example-humaneye) is a simple visualization of how optical systems focus light onto a desired point to produce an in-focus image.
<br>
```{r 12-focus-example-humaneye, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("Selected material from [How Focus Works](https://www.bhphotovideo.com/explora/photography/tips-and-solutions/how-focus-works). Copyright Todd Vorenkamp. Used with permission.)")
knitr::include_graphics(here::here("images", "12-focus_example_humaneye.gif"))
```
<br>
Now, picture another scene: you are scrolling through social media and you see a beautiful photo of Mt Assiniboine taken by your friend, a professional photographer based in the Canadian Rockies. You think to yourself, "Wow, that peak looks ENORMOUS! I want to visit there and see it for myself." So, you ask your friend exactly where they went, drive to Banff National Park, hike to the very same spot, and squint skywards. Hmm…though the mountain is still imposing, it is certainly not towering over you at close range as it was in the photo. You also notice several surrounding peaks that you couldn't see before. Your friend's picture was crystal-clear, and you have 20/20 vision, so you know it's not an issue with focusing properly. Are you being deceived?
Actually, yes, you are – by both your eyes *and* the camera your friend used. We like to think that our eyes show us the world as it truly is and that everything else is a facsimile, but in truth, all optical systems alter the scenes around us to show us what we need to see. From an evolutionary standpoint, you can see why clear resolution of close-range objects would be of vital importance for humans – think distinguishing edible plants from poisonous ones, hunting prey, reading facial expressions, etc. We can make out human-sized objects up to a distance of three kilometres in good lighting (https://www.livescience.com/33895-human-eye.html), but if you are interested in seeing something far away, such as a mountainside or a celestial body, you'll have to trade in your natural close-range viewing abilities for a system specialized for distant details – e.g., binoculars or a telescope. The distance at which objects can be resolved, and how they appear in an image, is determined by the lens. Read on to learn how different lens designs influence the appearance of a scene or object, and keep in mind how these designs may be used in various Earth observation applications.
Most, if not all, lenses in optical systems for remote sensing are **spherical lenses**, so called because each side of the lens is a section of a sphere, similar to a bowl. A **convex** optical surface curves outward from the lens centre, whereas a **concave** optical surface curves inward toward the lens centre. Though not spherical, a **planar** or flat optical surface may be used as well. A spherical lens is formed by joining two optical surfaces – concave, convex, and/or planar – back-to-back. A **biconvex** or positive lens combines two convex surfaces, and a **biconcave** or negative lens combines - you guessed it - two concave surfaces. The *radius of curvature* is the measure of how much an optical surface "bulges" or "caves". If you imagine tracing the edge of the surface in an arc and continuing the curve all the way around in a circle, the radius of this imagined circle would be the radius of curvature. Biconvex and biconcave lenses can be "equiconvex", meaning they have the same spherical curvature on each side, but may also have uneven curvatures. The lens in the human eye is an example of a lens with uneven curvatures – our radius of curvature is larger at the front. Figures \@ref(fig:12-RoC-convex) and \@ref(fig:12-RoC-concave) demonstrate how the radius of curvature is measured for both convex and concave optical surfaces.
<br>
```{r 12-RoC-convex, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("Measuring the radius of curvature for a convex optical surface. Claire Armour. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)")
knitr::include_graphics(here::here("images", "12-ROC_convex.png"))
```
<br>
```{r 12-RoC-concave, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("Measuring the radius of curvature for a concave optical surface. Claire Armour. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)")
knitr::include_graphics(here::here("images", "12-ROC_concave.png"))
```
<br>
:::: {.box-content .your-turn-content}
::: {.box-title .your-turn-top}
#### Your turn! {-}
:::
<p id="box-text">
What is the radius of curvature for a perfectly flat lens? See answer at end of chapter.
</p>
::::
<br>
### Focal Length
So, now that we know a little bit about what lenses look like, it's time to figure out exactly how they influence an image's journey from the front of the lens to the recording medium. As you might expect, different combinations of optical surfaces and radii of curvature will behave in different ways. Remember that for an image to be in focus, we need to ensure the light beams are landing precisely on the recording medium or screen.
**Convex optical surfaces cause light beams to *converge*, or focus, to a point behind the lens.**
**Concave optical surfaces cause light beams to *diverge*, or spread out, resulting in the light appearing to converge (focus) to a point in front of the lens.**
The point where the light converges or appears to converge is called the *focal point*, and the distance between the focal point and the centre of the lens is called the *focal length*. For a converging lens, the focal length is positive; for a diverging lens, it is negative. Figures \@ref(fig:12-convex-focal-diagram) and \@ref(fig:12-concave-focal-diagram) illustrate the behaviour of light when travelling through a biconvex and a biconcave lens. See the paragraph below the diagrams for variable labels.
<br>
```{r 12-convex-focal-diagram, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("Measurements in a biconvex lens. [User:DrBob](https://en.wikipedia.org/wiki/User:DrBob). [CC BY 3.0 Unported](https://creativecommons.org/licenses/by/3.0/)")
knitr::include_graphics(here::here("images", "12-convex_focal_diagram.png"))
```
<br>
```{r 12-concave-focal-diagram, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("Measurements in a biconcave lens. [User:DrBob](https://en.wikipedia.org/wiki/User:DrBob). [CC BY 3.0 Unported](https://creativecommons.org/licenses/by/3.0/)")
knitr::include_graphics(here::here("images", "12-concave_focal_diagram.png"))
```
<br>
The optical power of a lens – the degree to which it can converge or diverge light – is the reciprocal of its focal length. Essentially, a "powerful" lens is able to refract light beams through sharper angles, causing them to converge or appear to converge closer to the lens, i.e., at a smaller focal length. The **Lensmaker's Equation** (Equation 1) allows us to calculate the focal length ($f$) and/or optical power ($\frac{1}{f}$) as a function of the radii of curvature ($R$), the thickness of the lens between the optical surfaces ($d$), and the refractive index of the lens material ($n$). Note that $R_1$ is the radius of curvature of the front surface - the side of the lens closest to the origin of the light - and $R_2$ is that of the back surface.
Equation 1: \[\frac{1}{f} = (n-1)[\frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)d}{nR_1R_2}]\]
[This Wolfram Demonstration](https://demonstrations.wolfram.com/LensmakersEquation/#embed) allows you to manually adjust the parameters of the Lensmaker's Equation and see the outputs.
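To make Equation 1 concrete, here is a minimal R sketch of the calculation. The lens parameters are arbitrary illustrative values (a refractive index of about 1.5 is typical of crown glass), not the specifications of any real instrument.
```{r 12-lensmaker-sketch, echo = TRUE}
# Lensmaker's Equation (Equation 1):
#   1/f = (n - 1) * (1/R1 - 1/R2 + (n - 1) * d / (n * R1 * R2))
# R1 is the front surface, R2 the back surface, d the lens thickness, n the refractive index.
lensmaker_focal_length <- function(n, R1, R2, d) {
  power <- (n - 1) * (1 / R1 - 1 / R2 + (n - 1) * d / (n * R1 * R2))  # optical power, 1/f
  1 / power                                                           # focal length, f
}

# Illustrative biconvex lens: n ~ 1.5, R1 = +100 mm, R2 = -100 mm, d = 10 mm
lensmaker_focal_length(n = 1.5, R1 = 100, R2 = -100, d = 10)  # ~102 mm (positive: converging)
```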
We know how lenses impact focal length, but how does focal length impact a photo? Let us return to our scenario in Banff National Park, where we have two mismatched images of the same mountain. You've done some investigating and found that the camera your friend used is very large (and expensive). The lens at the front is quite far from the recording medium in the body of the camera – many times the distance between your own eye lens and recording medium, the retina. This difference in focal length is the cause of the differing images. A low optical power lens with a long focal length will have high magnification, causing distant objects to appear larger and narrowing the field of view (see the Field of View section below). A high optical power lens with a short focal length, such as your eye, will have low magnification and a larger field of view by comparison. Mystery solved!
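For a simple rectilinear lens, the angular field of view follows directly from the focal length and the width of the recording medium: $\mathrm{AOV} = 2\arctan\left(\frac{s}{2f}\right)$, where $s$ is the width of the recording medium and $f$ is the focal length. The R sketch below assumes a 36 mm wide recording medium (the width of a common full-frame camera sensor) purely for illustration, and compares the 67 mm and 105 mm focal lengths mentioned in the exercise below.
```{r 12-fov-sketch, echo = TRUE}
# Angular field of view (degrees) for a simple rectilinear lens:
#   AOV = 2 * atan(sensor_width / (2 * focal_length))
angular_fov_deg <- function(focal_length_mm, sensor_width_mm = 36) {
  2 * atan(sensor_width_mm / (2 * focal_length_mm)) * 180 / pi
}

# Longer focal length -> higher magnification -> narrower field of view
angular_fov_deg(focal_length_mm = 67)   # ~30 degrees
angular_fov_deg(focal_length_mm = 105)  # ~19 degrees
```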
<br>
:::: {.box-content .your-turn-content}
::: {.box-title .your-turn-top}
#### Your turn! {-}
:::
<p id="box-text">
When you return from your trip, your friend decides to test you on your new skills and shows you these additional photos they took of Mt Assiniboine in nearly identical spots on the same day. They ask you which photo was taken with a longer camera lens. How can you know?
*Try this*: from the peak of Mt Assiniboine (the very big one), draw a line straight downwards or cover half of the photo with a piece of paper, and then do the same for the other photo. Does the line or paper edge intersect the same points of the foreground in each photo? Can you see the same parts of the mountains in the foreground? Use the rock and snow patterns for reference. If the lenses had the same focal length, even with the different cropping and lighting seen here, the answers should both be yes. Can you tell which photo was taken with a 67mm lens and which was taken with a 105mm lens? See the answers at the end of the chapter.
<br>
```{r 12-assiniboine-day, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("Mt Assiniboine, image one. Kyle Maguire. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)")
knitr::include_graphics(here::here("images", "12-assiniboine_day.jpg"))
```
<br>
```{r 12-assiniboine-sunset, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("Mt Assiniboine, image two. Kyle Maguire. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)")
knitr::include_graphics(here::here("images", "12-assiniboine_sunset.jpg"))
```
</p>
::::
<br>
### Principal Points
_Wondering if this section is necessary. In order to explain principal points, we would need to explain principal planes, which requires an in-depth explanation of lens shapes, light physics, refraction, and sign conventions of light rays, which we truncated from the previous section. I suggest removing it if you don't think it is key to the chapter._
### Field of View
Though you are looking at a screen, there are likely other objects you can see above, below, and beside it. If someone walks into the room behind you, you may turn your head to look at them. If you hear the phone ringing in another room, you may walk over to see who is calling. All of these scenarios represent different uses of your *field of view* (FOV), or what you can see in a given moment.
Building and launching remote sensing systems is very costly, so it's in our best interest to have them see as much as possible with the least expenditure of effort. We can maximize the FOV of a remote sensing system by giving it the freedom to "look around". Humans have three degrees of motion that allow us to change our FOV: scanning with our eyes, swiveling our heads, and shifting the position of our bodies. Optical systems can have three analogous degrees of motion: the motion of the lens elements (eyes), the motion of the camera (head), and the motion of the platform (body).
Examples of optical systems with different degrees of motion are given in Table \@ref(tab:12-DoM-table).
<br>
```{r 12-DoM-table, echo = FALSE}
# kable tables take their caption (and cross-reference label) from the caption argument
dom_table <- as.data.frame(read.csv(here::here("data", "12", "DoM-table.csv")))
knitr::kable(dom_table, caption = "Examples of the degrees of motion in remote sensing systems")
```
<br>
Remote sensing systems typically have zero, one, or two degrees of motion. Rarely do optical systems have all three. There are two reasons for this: first, it's not necessary to have so much adjustment; second, more moving parts means a higher possibility of malfunction, which is a real headache when the defunct system could be 700 km above the Earth's surface.
When a remote sensing system has two or more degrees of motion, we introduce a second measure of FOV called the *instantaneous field of view*, or *IFOV*. The FOV tells us what the remote sensing system is capable of seeing through its entire range of motion. The IFOV tells us what the remote sensing system can detect at a single given moment in that range of motion. Frequently, the IFOV of a system corresponds to a single pixel in the resulting image.
Both FOV and IFOV are measured as angles: the FOV is the angle subtended by the system's full view across its range of motion, while the IFOV is the angle subtended by a single detector element at a given instant.
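Because the IFOV is an angle, the patch of ground it covers grows with the platform's height above the surface. A common approximation is that the ground footprint at nadir is $2H\tan(\beta/2)$, or roughly $H\beta$ for small angles, where $H$ is the platform height and $\beta$ is the IFOV in radians. The R sketch below uses assumed, illustrative numbers rather than the specifications of any particular sensor.
```{r 12-ifov-sketch, echo = TRUE}
# Approximate ground footprint of an IFOV at nadir:
#   footprint = 2 * H * tan(beta / 2)   (H = platform height, beta = IFOV in radians)
ground_footprint_m <- function(height_m, ifov_rad) {
  2 * height_m * tan(ifov_rad / 2)
}

# Illustrative values only (not a real sensor specification):
# a 0.1 milliradian IFOV viewed from 1,000 m and from 700 km
ground_footprint_m(height_m = 1000,   ifov_rad = 0.0001)  # ~0.1 m from an aircraft
ground_footprint_m(height_m = 700000, ifov_rad = 0.0001)  # ~70 m from a satellite
```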
<br>
:::: {.box-content .call-out-content}
::: {.box-title .call-out-top}
#### Did you know? {-}
:::
<p id="box-text">
Another hot summer day, another pesky fly buzzing around your home...and no matter how sneaky your approach is, the fly always seems to evade you and your swatter at the last possible second. You may be pleased to learn that it's not your slow reflexes foiling you, but the fly's advanced eyesight and a phenomenon called _compound vision_. Their relatively enormous "eyes" are in fact hundreds of smaller optical receptors called _ommatidia_ that provide a nearly 360-degree view of the fly's surroundings at any given time (https://www.nikonsmallworld.com/galleries/2019-photomicrography-competition/housefly-compound-eye-pattern). This huge *field of view* allows flies to steer through dense vegetation and, importantly, avoid being crushed by the latest print edition of the local newspaper.
</p>
::::
### Perspectives
All sighted creatures that we know of - save for the tiny aquatic crustaceans known as copepods (https://askdruniverse.wsu.edu/2016/05/31/are-there-creatures-on-earth-with-one-eye/) - have two or more eyes. As the eyes are at different locations in space, each eye perceives a slightly different image. We also have precise information on the location of our eyes, the angle of our heads, and their distance to the ground surface. Our brains combine this information to create a three-dimensional scene. Our binocular ("two-eyed") vision means we can estimate the size, distance, and/or location of most objects - no further information needed.
However, almost all remote sensing systems have monocular ("one-eyed") vision, which limits them to producing flat, two-dimensional imagery. Using the image alone, we cannot readily measure the size, distance, and location of objects in a scene, nor can we compare it with other images in that location - a must for earth observation applications! Much like the auxiliary information our brain uses to create a three-dimensional scene, we can make a two-dimensional image "spatially explicit" by measuring the following:
1. The precise location of the camera in three-dimensional space
2. The positioning or perspective of the camera
Finding the camera location is fairly straightforward. We can use a Global Positioning System (GPS) to record our exact coordinates. Depending on the platform - terrestrial, aerial, or spaceborne - we can use various tools to record the platform's height, altitude, and/or elevation.
The camera perspective, including lens angle and direction, heavily influences how objects are perceived in imagery. Similarly to how accidentally opening your phone camera on selfie mode is not ideal for a flattering photo of your face, there are favourable perspectives for observing different natural phenomena. It's therefore of high importance to carefully select the best perspective for the desired use of your imagery.
The precise angle of the camera is also crucial. Thinking back to map projections in Chapter X, you will recall that representing our three-dimensional planet in a two-dimensional space causes certain regions to be heavily distorted in shape and size. You will also recall from earlier in this chapter how a camera's optical power changes the way objects at varying distances are seen.
There are four camera perspectives used for Earth observation discussed here: aerial, nadir (pronounced NAYD-er), oblique and hemispherical. Each one is briefly explained below with photos and some example applications.
#### Aerial Perspective
The plane of the lens is parallel to the ground plane and the lens vector points straight downwards at the ground. Figure \@ref(fig:12-aerial-plane) is an example.
```{r 12-aerial-plane, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("[Aerial photo of forest, road, and river near Kitimat, BC](https://unsplash.com/photos/UFwW97AP0LI). [Ben den Engelsen](https://unsplash.com/@benjeeeman) on Unsplash. [Unsplash License](https://unsplash.com/license)")
knitr::include_graphics(here::here("images", "12-aerial-plane.jpg"))
```
Aerial imagery can be taken from remotely piloted aircraft systems (RPAS), airplanes, or satellites and thus has a huge range of resolutions and area coverage. It's highly sensitive to adverse weather, cloud cover or poor air quality, and variable lighting, so it needs to be carefully timed or collected at frequent intervals to account for unusable data.
Common applications:
- *Mapping land cover and land use*
- *Assessing ecosystem disturbance frequency and severity*
- *Calculating indices such as Normalized Difference Vegetation Index (NDVI) and Normalized Difference Burn Ratio (NDBR) (see the NDVI sketch after this list)*
- *Collecting climate and weather data - think thermal maps, storm tracking, coastline changes, etc*
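As a small, self-contained illustration of one of the indices above, NDVI is computed pixel by pixel as $\mathrm{NDVI} = \frac{NIR - Red}{NIR + Red}$. The R sketch below uses two tiny made-up reflectance matrices standing in for the near-infrared and red bands of an aerial image; the values are invented for illustration and are not drawn from any dataset used in this book.
```{r 12-ndvi-sketch, echo = TRUE}
# NDVI = (NIR - Red) / (NIR + Red), computed pixel by pixel
# Tiny made-up reflectance values standing in for two bands of an aerial image
nir <- matrix(c(0.45, 0.50, 0.30,
                0.48, 0.20, 0.25), nrow = 2, byrow = TRUE)
red <- matrix(c(0.08, 0.10, 0.15,
                0.09, 0.18, 0.16), nrow = 2, byrow = TRUE)

ndvi <- (nir - red) / (nir + red)
round(ndvi, 2)  # values near +1 suggest dense, healthy vegetation; near 0, bare or sparse cover
```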
_It's important to note that the "ground plane" refers to a plane tangent to the geoid and not the physical ground surface. In variable terrain such as mountains, much of the ground will be seen "at an angle", but the overall camera perspective is unchanged. See figures \@ref(fig:12-aerial-good) and \@ref(fig:12-aerial-bad) for a visualization of what this looks like with regards to aerial imagery._
```{r 12-aerial-good, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("How aerial imagery should be taken. Claire Armour. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)")
knitr::include_graphics(here::here("images", "12-aerial-good.gif"))
```
```{r 12-aerial-bad, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("How aerial imagery should NOT be taken. Claire Armour. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)")
knitr::include_graphics(here::here("images", "12-aerial-bad.gif"))
```
#### Nadir Perspective
The plane of the lens is parallel to the ground plane and the lens vector points straight upwards towards the sky. Figure \@ref(fig:12-nadir-image) is an example of a nadir image.
```{r 12-nadir-image, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("[Looking towards the sky through the trees in a forest in Sweden](https://unsplash.com/photos/Mg1xk9i0yEU). [Joachim Bohlander](https://unsplash.com/@joacimbohlander) on Unsplash. [Unsplash License](https://unsplash.com/license)")
knitr::include_graphics(here::here("images", "12-nadir-image.jpg"))
```
<br>
For terrestrial ecosystem applications, nadir imagery will be taken from a terrestrial platform, such as a handheld camera or LiDAR apparatus, or from a sub-canopy drone. Nadir imagery is less sensitive to cloud cover but will still be impacted by adverse weather and variable lighting. For example, an unmonitored camera placed on or near the ground can easily become obscured by dirt, water, or snow. Nadir perspective photos are ideally suited for capturing imagery where both the sky and structures on the ground need to be visible.
The term _nadir_ has a second meaning which is unrelated to perspective: it refers to the point on the ground plane *directly below* a platform, regardless of that platform's angle or perspective. For a scanning platform, the nadir is the lowest point in the scanning arc, typically on the ground. If you looked straight downwards at your feet from a standing position - an aerial perspective - your feet would be the nadir point. If you were lying on the ground looking at the sky - a nadir perspective - the place where your head rests on the ground would be the nadir point.
The opposite of nadir is the _zenith_. In the same analogy as before, if you were to stare at your feet from a standing position, the zenith would be where your head is. If you were to lie on the ground and look at the sky, the zenith would be directly above your head. Since there is no "sky plane" or point of reference in the upwards direction where we can clearly define the zenith height, this term more commonly refers to the highest point in the scanning arc for a moving platform. Think of it as a swing set: the starting position, where the swing is closest to the ground, is the nadir of your arc; the high point, where you slow to a stop and begin swinging the other way, is the zenith.
Common applications:
- *Determining crown closure or canopy cover*
- *Viewing branch networks*
- *Measuring Leaf Area Index (LAI) for the upper canopy*
- *Astronomy and cosmological observation*
#### Oblique Perspective
The plane of the lens is _not_ parallel to the ground plane - essentially, if it's not aerial and it's not nadir, it's oblique. The vast majority of imagery you see on social media, in the news, or on TV is from an oblique perspective. Oblique imagery is ideally suited for comparing object sizes or viewing areas which would otherwise be occluded in aerial imagery. Nearly all terrestrial platforms take oblique imagery, and it is readily used on airborne and spaceborne platforms. A scanning platform will have an oblique perspective when it is not at the nadir or zenith of its scan arc. Figure \@ref(fig:12-oblique-louise) is an example of an oblique image of a natural area.
```{r 12-oblique-louise, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("[Oblique image of mountainside trees in Lake Louise, Alberta, Canada](https://unsplash.com/photos/P36KI_ws3vs). [Vlad Shapochnikov](https://unsplash.com/@vladshap) on Unsplash. [Unsplash License](https://unsplash.com/license)")
knitr::include_graphics(here::here("images", "12-oblique_louise.jpg"))
```
Common applications:
- *Viewing and measuring forest understorey and mid canopy*
- *Assessing post-disturbance recovery*
- *Assessing wildfire fuel loading*
- *Providing context for aerial and nadir imagery*
- *Comparing individual trees or vegetation*
#### Hemispherical Perspective
A hemispherical perspective has less to do with camera positioning and more to do with field of view, but it is still a "perspective". It captures imagery in the half-sphere (hemisphere) directly in front of the lens. The radius of the hemisphere depends on the lens size and the optical power of the hemispherical lens used in the camera. Figure \@ref(fig:12-hemisphere-view) below visualizes a hemispherical perspective.
<br>
```{r 12-hemisphere-view, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("Visualization of a hemispherical perspective. Claire Armour. [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)")
knitr::include_graphics(here::here("images", "12-hemisphere-view.png"))
```
<br>
Due to the unusual shape of the lens, it captures a much larger proportion of a scene than we could normally take in without swiveling our heads or stitching photos into a mosaic, such as a panorama. A hemispherical lens will produce a circular rather than rectilinear output. The lens curvature will cause objects in the image to be highly distorted, and unlike rectilinear photos, the output cannot be easily divided into pixels for analysis. However, hemispherical perspectives are uniquely suited to viewing large expanses of a scene all at once. For this reason, they are highly favourable for sports cameras, security cameras, and monitoring natural environments. Figures \@ref(fig:12-hemispherical-lens) and \@ref(fig:12-newfoundland-from-space) show hemispherical perspectives from two very different angles.
<br>
```{r 12-hemispherical-lens, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("[Hemispherical photo taken in the Bavarian forest](https://en.wikipedia.org/wiki/File:Hemispherical_photo1.jpg). [Martin Wegmann](https://commons.wikimedia.org/wiki/User:Wegmann). [CC BY 3.0 Unported](https://creativecommons.org/licenses/by/3.0/)")
knitr::include_graphics(here::here("images", "12-hemispherical-lens.jpg"))
```
<br>
```{r 12-newfoundland-from-space, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("[Picture of Newfoundland, Canada, taken by David Saint-Jacques during his space mission.](https://asc-csa.gc.ca/eng/multimedia/search/image/watch/13330). [Canadian Space Agency/NASA](https://asc-csa.gc.ca/eng/terms.asp#copyright). [CC BY 3.0 Unported](https://creativecommons.org/licenses/by/3.0/)")
knitr::include_graphics(here::here("images", "12-newfoundland_from_space.jpg"))
```
<br>
Common applications:
- *Astronomy and cosmological observation*
- *Tracking road and trail usage by wildlife, humans, and/or transport vehicles*
- *Measuring Leaf Area Index (LAI) for the entire canopy*
<br>
## Sensors
Sensor intro - Paul
### Detectors
Paul
### Analog-to-Digital Converters
Paul
### Panchromatic Sensors
Paul
### Multispectral Sensors
Paul
### Hyperspectral Sensors
Paul
## Platforms
*_Relocated intro text from Claire:_*
The surface of the Earth is around 510 million square kilometres, or 323 billion hockey rinks, so as you can imagine, capturing even a small percentage of that area is a daunting task. Recent advancements in astrophysics and aerospace research have allowed us to conceive, build, and launch satellites from all over the globe to measure Earth from space. In fact, no fewer than 27,000 objects, including decommissioned platforms and debris, are currently whizzing through Earth’s orbit at eye-watering speeds of 7 kilometres per second or more (NASA, 2021). The image shown below is a visualization of all these objects in Earth’s orbit. If you are curious about where individual satellites are located, check out “Stuff in Space”, a website that visualizes satellite orbits all over the globe using real-time data. [Click here](http://www.stuffin.space/ "Stuff in Space") to open the webpage on your browser and try clicking some satellites to see their orbit and various parameters such as altitude, velocity, country of origin, and more.
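That "7 kilometres per second or more" figure follows from basic orbital mechanics: for a circular orbit, speed is $v = \sqrt{\frac{GM_E}{R_E + h}}$, where $GM_E$ is Earth's gravitational parameter, $R_E$ is Earth's mean radius, and $h$ is the orbital altitude. The R sketch below is only a rough sanity check using standard physical constants and a few illustrative altitudes; it does not describe any particular satellite.
```{r 12-orbital-speed-sketch, echo = TRUE}
# Circular orbital speed: v = sqrt(GM / (R_earth + altitude))
GM_earth <- 3.986e14   # Earth's gravitational parameter (m^3/s^2)
R_earth  <- 6.371e6    # mean Earth radius (m)

orbital_speed_kms <- function(altitude_km) {
  sqrt(GM_earth / (R_earth + altitude_km * 1000)) / 1000  # km/s
}

orbital_speed_kms(400)    # ~7.7 km/s, roughly the altitude of the International Space Station
orbital_speed_kms(700)    # ~7.5 km/s, a typical sun-synchronous Earth observation altitude
orbital_speed_kms(35786)  # ~3.1 km/s, geostationary altitude
```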
### Terrestrial Systems
Paul
### Aerial systems
Paul
### Satellite Systems
Paul
### Orbital Physics
Paul
### Low Earth Orbit
Paul
### Medium Earth Orbit
Paul
### Polar and Sun-synchronous Orbit
Paul
### Geostationary Orbit
Paul
### Lagrange Points
Paul
## Important Satellite Systems for Environmental Management
### Landsat
### RADARSAT
### Advanced Very High Resolution Radiometer (AVHRR)
### Moderate Resolution Imaging Spectroradiometer (MODIS)
### ICESat
### Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER)
### Defense Meteorological Satellite Program (DMSP)
### Visible Infrared Imaging Radiometer Suite (VIIRS)
### WildfireSat
:::: {.box-content .case-study-content}
::: {.box-title .case-study-top}
#### Case Study {-}
:::
#### Case study title max of forty characters {#box-text -}
<p id="box-text">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent non magna non nunc auctor eleifend. Etiam fringilla fermentum nisl at volutpat. Duis sodales lorem interdum, posuere neque sollicitudin, malesuada nisl. Sed laoreet commodo dui et cursus. Curabitur vel porta mauris. In lacinia ac ex ac varius. Nullam at vulputate ligula. Nunc sed faucibus urna, eget placerat sapien. Fusce ut lorem a ante congue consequat et sed tortor. Integer neque urna, vehicula at aliquam non, luctus non mi. Pellentesque et massa vehicula, cursus erat id, aliquet sapien. Sed rhoncus vehicula risus, vel aliquam eros porta eget. Etiam maximus, massa in pretium semper, est libero feugiat ipsum, quis tempor nunc libero in nibh. Nunc sollicitudin mattis metus ut rutrum. Donec urna purus, sodales id nibh in, ultricies tincidunt nibh. Proin porta accumsan aliquet.</p>
<p id="box-text">Cras convallis erat ante, ut tristique est tempor elementum. Nam mollis, ipsum at vehicula vestibulum, est magna finibus nisl, et laoreet metus massa et ipsum. Proin eget eros ac odio euismod volutpat et ac diam. Cras viverra ut libero vel pulvinar. Duis nisi magna, sagittis id ligula ac, efficitur commodo eros. Nulla tincidunt id nulla in lobortis. Sed non mi eu mi fermentum cursus. In elit velit, semper sed gravida at, imperdiet sed nibh. Aliquam quis massa malesuada, venenatis nunc sed, malesuada nulla. Vestibulum malesuada purus ut ex ullamcorper, ut blandit lacus lobortis. Curabitur scelerisque velit justo, quis porttitor purus efficitur ut. Phasellus nec arcu vestibulum, consequat ante id, elementum velit. Proin arcu tortor, cursus vitae sem id, congue semper urna. Integer tempus in est eu consequat. Donec sodales, quam vel finibus faucibus, leo ante dictum quam, id tempus ligula dolor quis erat. Curabitur non elementum sem. Mauris placerat fermentum orci non lacinia. Ut imperdiet dui lectus, ac malesuada felis euismod sed. Nulla non volutpat dui, in suscipit turpis.</p>
<p id="box-text">Case studies should have at least one image or map (no more than 2 total) and the written length should be around 300 words (shown above). Any references to external literature should by hyperlinked with the Digitial Object Identifier (DOI) permanent URL and [entered into the bibliography](https://bookdown.org/yihui/bookdown/citations.html){target="_blank"}. Avoid linking to external resources without a DOI and permanent URL. Contact Paul or try using the Leaflet package in R if you want to add an interactive web map.</p>
::::
## Summary
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut in dolor nibh. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent et augue scelerisque, consectetur lorem eu, auctor lacus. Fusce metus leo, aliquet at velit eu, aliquam vehicula lacus. Donec libero mauris, pharetra sed tristique eu, gravida ac ex. Phasellus quis lectus lacus. Vivamus gravida eu nibh ac malesuada. Integer in libero pellentesque, tincidunt urna sed, feugiat risus. Sed at viverra magna. Sed sed neque sed purus malesuada auctor quis quis massa.
### Reflection Questions {-}
1. Explain ipsum lorem.
2. Define ipsum lorem.
3. What is the role of ipsum lorem?
4. How does ipsum lorem work?
### Practice Questions {-}
1. Given ipsum, solve for lorem.
2. Draw ipsum lorem.
`r if (knitr::is_html_output()) '
## Recommended Readings {-}
'`
Ensure all inline citations are properly referenced here.
```{r include=FALSE}
knitr::write_bib(c(
.packages(), 'bookdown', 'knitr', 'rmarkdown', 'htmlwidgets', 'webshot', 'DT',
'miniUI', 'tufte', 'servr', 'citr', 'rticles'
), 'packages.bib')
```