<!DOCTYPE html>
<html lang="en"><head>
<script src="index_files/libs/clipboard/clipboard.min.js"></script>
<script src="index_files/libs/quarto-html/tabby.min.js"></script>
<script src="index_files/libs/quarto-html/popper.min.js"></script>
<script src="index_files/libs/quarto-html/tippy.umd.min.js"></script>
<link href="index_files/libs/quarto-html/tippy.css" rel="stylesheet">
<link href="index_files/libs/quarto-html/light-border.css" rel="stylesheet">
<link href="index_files/libs/quarto-html/quarto-html.min.css" rel="stylesheet" data-mode="light">
<link href="index_files/libs/quarto-html/quarto-syntax-highlighting.css" rel="stylesheet" id="quarto-text-highlighting-styles">
<script src="index_files/libs/quarto-contrib/videojs/video.min.js"></script>
<link href="index_files/libs/quarto-contrib/videojs/video-js.css" rel="stylesheet"><meta charset="utf-8">
<meta name="generator" content="quarto-1.4.555">
<meta name="author" content="Valentin Patilea^\dagger">
<meta name="author" content="Jeffrey S. Racine^\ddagger">
<title>Locally Adaptive Online Functional Data Analysis</title>
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no, minimal-ui">
<link rel="stylesheet" href="index_files/libs/revealjs/dist/reset.css">
<link rel="stylesheet" href="index_files/libs/revealjs/dist/reveal.css">
<style>
code{white-space: pre-wrap;}
span.smallcaps{font-variant: small-caps;}
div.columns{display: flex; gap: min(4vw, 1.5em);}
div.column{flex: auto; overflow-x: auto;}
div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
ul.task-list{list-style: none;}
ul.task-list li input[type="checkbox"] {
width: 0.8em;
margin: 0 0.8em 0.2em -1em; /* quarto-specific, see https://github.com/quarto-dev/quarto-cli/issues/4556 */
vertical-align: middle;
}
/* CSS for citations */
div.csl-bib-body { }
div.csl-entry {
clear: both;
margin-bottom: 0em;
}
.hanging-indent div.csl-entry {
margin-left:2em;
text-indent:-2em;
}
div.csl-left-margin {
min-width:2em;
float:left;
}
div.csl-right-inline {
margin-left:2em;
padding-left:1em;
}
div.csl-indent {
margin-left: 2em;
} </style>
<link rel="stylesheet" href="index_files/libs/revealjs/dist/theme/quarto.css">
<link rel="stylesheet" href="custom.css">
<link href="index_files/libs/revealjs/plugin/quarto-line-highlight/line-highlight.css" rel="stylesheet">
<link href="index_files/libs/revealjs/plugin/reveal-menu/menu.css" rel="stylesheet">
<link href="index_files/libs/revealjs/plugin/reveal-menu/quarto-menu.css" rel="stylesheet">
<link href="index_files/libs/revealjs/plugin/reveal-chalkboard/font-awesome/css/all.css" rel="stylesheet">
<link href="index_files/libs/revealjs/plugin/reveal-chalkboard/style.css" rel="stylesheet">
<link href="index_files/libs/revealjs/plugin/quarto-support/footer.css" rel="stylesheet">
<style type="text/css">
.callout {
margin-top: 1em;
margin-bottom: 1em;
border-radius: .25rem;
}
.callout.callout-style-simple {
padding: 0em 0.5em;
border-left: solid #acacac .3rem;
border-right: solid 1px silver;
border-top: solid 1px silver;
border-bottom: solid 1px silver;
display: flex;
}
.callout.callout-style-default {
border-left: solid #acacac .3rem;
border-right: solid 1px silver;
border-top: solid 1px silver;
border-bottom: solid 1px silver;
}
.callout .callout-body-container {
flex-grow: 1;
}
.callout.callout-style-simple .callout-body {
font-size: 1rem;
font-weight: 400;
}
.callout.callout-style-default .callout-body {
font-size: 0.9rem;
font-weight: 400;
}
.callout.callout-titled.callout-style-simple .callout-body {
margin-top: 0.2em;
}
.callout:not(.callout-titled) .callout-body {
display: flex;
}
.callout:not(.no-icon).callout-titled.callout-style-simple .callout-content {
padding-left: 1.6em;
}
.callout.callout-titled .callout-header {
padding-top: 0.2em;
margin-bottom: -0.2em;
}
.callout.callout-titled .callout-title p {
margin-top: 0.5em;
margin-bottom: 0.5em;
}
.callout.callout-titled.callout-style-simple .callout-content p {
margin-top: 0;
}
.callout.callout-titled.callout-style-default .callout-content p {
margin-top: 0.7em;
}
.callout.callout-style-simple div.callout-title {
border-bottom: none;
font-size: .9rem;
font-weight: 600;
opacity: 75%;
}
.callout.callout-style-default div.callout-title {
border-bottom: none;
font-weight: 600;
opacity: 85%;
font-size: 0.9rem;
padding-left: 0.5em;
padding-right: 0.5em;
}
.callout.callout-style-default div.callout-content {
padding-left: 0.5em;
padding-right: 0.5em;
}
.callout.callout-style-simple .callout-icon::before {
height: 1rem;
width: 1rem;
display: inline-block;
content: "";
background-repeat: no-repeat;
background-size: 1rem 1rem;
}
.callout.callout-style-default .callout-icon::before {
height: 0.9rem;
width: 0.9rem;
display: inline-block;
content: "";
background-repeat: no-repeat;
background-size: 0.9rem 0.9rem;
}
.callout-title {
display: flex
}
.callout-icon::before {
margin-top: 1rem;
padding-right: .5rem;
}
.callout.no-icon::before {
display: none !important;
}
.callout.callout-titled .callout-body > .callout-content > :last-child {
padding-bottom: 0.5rem;
margin-bottom: 0;
}
.callout.callout-titled .callout-icon::before {
margin-top: .5rem;
padding-right: .5rem;
}
.callout:not(.callout-titled) .callout-icon::before {
margin-top: 1rem;
padding-right: .5rem;
}
/* Callout Types */
div.callout-note {
border-left-color: #4582ec !important;
}
div.callout-note .callout-icon::before {
background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAAXNSR0IArs4c6QAAAERlWElmTU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAA6ABAAMAAAABAAEAAKACAAQAAAABAAAAIKADAAQAAAABAAAAIAAAAACshmLzAAAEU0lEQVRYCcVXTWhcVRQ+586kSUMMxkyaElstCto2SIhitS5Ek8xUKV2poatCcVHtUlFQk8mbaaziwpWgglJwVaquitBOfhQXFlqlzSJpFSpIYyXNjBNiTCck7x2/8/LeNDOZxDuEkgOXe++553zfefee+/OYLOXFk3+1LLrRdiO81yNqZ6K9cG0P3MeFaMIQjXssE8Z1JzLO9ls20MBZX7oG8w9GxB0goaPrW5aNMp1yOZIa7Wv6o2ykpLtmAPs/vrG14Z+6d4jpbSKuhdcSyq9wGMPXjonwmESXrriLzFGOdDBLB8Y6MNYBu0dRokSygMA/mrun8MGFN3behm6VVAwg4WR3i6FvYK1T7MHo9BK7ydH+1uurECoouk5MPRyVSBrBHMYwVobG2aOXM07sWrn5qgB60rc6mcwIDJtQrnrEr44kmy+UO9r0u9O5/YbkS9juQckLed3DyW2XV/qWBBB3ptvI8EUY3I9p/67OW+g967TNr3Sotn3IuVlfMLVnsBwH4fsnebJvyGm5GeIUA3jljERmrv49SizPYuq+z7c2H/jlGC+Ghhupn/hcapqmcudB9jwJ/3jvnvu6vu5lVzF1fXyZuZZ7U8nRmVzytvT+H3kilYvH09mLWrQdwFSsFEsxFVs5fK7A0g8gMZjbif4ACpKbjv7gNGaD8bUrlk8x+KRflttr22JEMRUbTUwwDQScyzPgedQHZT0xnx7ujw2jfVfExwYHwOsDTjLdJ2ebmeQIlJ7neo41s/DrsL3kl+W2lWvAga0tR3zueGr6GL78M3ifH0rGXrBC2aAR8uYcIA5gwV8zIE8onoh8u0Fca/ciF7j1uOzEnqcIm59sEXoGc0+z6+H45V1CvAvHcD7THztu669cnp+L0okAeIc6zjbM/24LgGM1gZk7jnRu1aQWoU9sfUOuhrmtaPIO3YY1KLLWZaEO5TKUbMY5zx8W9UJ6elpLwKXbsaZ4EFl7B4bMtDv0iRipKoDQT2sNQI9b1utXFdYisi+wzZ/ri/1m7QfDgEuvgUUEIJPq3DhX/5DWNqIXDOweC2wvIR90Oq3lDpdMIgD2r0dXvGdsEW5H6x6HLRJYU7C69VefO1x8Gde1ZFSJLfWS1jbCnhtOPxmpfv2LXOA2Xk2tvnwKKPFuZ/oRmwBwqRQDcKNeVQkYcOjtWVBuM/JuYw5b6isojIkYxyYAFn5K7ZBF10fea52y8QltAg6jnMqNHFBmGkQ1j+U43HMi2xMar1Nv0zGsf1s8nUsmUtPOOrbFIR8bHFDMB5zL13Gmr/kGlCkUzedTzzmzsaJXhYawnA3UmARpiYj5ooJZiUoxFRtK3X6pgNPv+IZVPcnwbOl6f+aBaO1CNvPW9n9LmCp01nuSaTRF2YxHqZ8DYQT6WsXT+RD6eUztwYLZ8rM+rcPxamv1VQzFUkzFXvkiVrySGQgJNvXHJAxiU3/NwiC03rSf05VBaPtu/Z7/B8Yn/w7eguloAAAAAElFTkSuQmCC');
}
div.callout-note.callout-style-default .callout-title {
background-color: #dae6fb
}
div.callout-important {
border-left-color: #d9534f !important;
}
div.callout-important .callout-icon::before {
background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAAXNSR0IArs4c6QAAAERlWElmTU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAA6ABAAMAAAABAAEAAKACAAQAAAABAAAAIKADAAQAAAABAAAAIAAAAACshmLzAAAEKklEQVRYCcVXTWhcVRS+575MJym48A+hSRFr00ySRQhURRfd2HYjk2SSTokuBCkU2o0LoSKKraKIBTcuFCoidGFD08nkBzdREbpQ1EDNIv8qSGMFUboImMSZd4/f9zJv8ibJMC8xJQfO3HPPPef7zrvvvnvviIkpC9nsw0UttFunbUhpFzFtarSd6WJkStVMw5xyVqYTvkwfzuf/5FgtkVoB0729j1rjXwThS7Vio+Mo6DNnvLfahoZ+i/o32lULuJ3NNiz7q6+pyAUkJaFF6JwaM2lUJlV0MlnQn5aTRbEu0SEqHUa0A4AdiGuB1kFXRfVyg5d87+Dg4DL6m2TLAub60ilj7A1Ec4odSAc8X95sHh7+ZRPCFo6Fnp7HfU/fBng/hi10CjCnWnJjsxvDNxWw0NfV6Rv5GgP3I3jGWXumdTD/3cbEOP2ZbOZp69yniG3FQ9z1jD7bnBu9Fc2tKGC2q+uAJOQHBDRiZX1x36o7fWBs7J9ownbtO+n0/qWkvW7UPIfc37WgT6ZGR++EOJyeQDSb9UB+DZ1G6DdLDzyS+b/kBCYGsYgJbSQHuThGKRcw5xdeQf8YdNHsc6ePXrlSYMBuSIAFTGAtQo+VuALo4BX83N190NWZWbynBjhOHsmNfFWLeL6v+ynsA58zDvvAC8j5PkbOcXCMg2PZFk3q8MjI7WAG/Dp9AwP7jdGBOOQkAvlFUB+irtm16I1Zw9YBcpGTGXYmk3kQIC/Cds55l+iMI3jqhjAuaoe+am2Jw5GT3Nbz3CkE12NavmzN5+erJW7046n/CH1RO/RVa8lBLozXk9uqykkGAyRXLWlLv5jyp4RFsG5vGVzpDLnIjTWgnRy2Rr+tDKvRc7Y8AyZq10jj8DqXdnIRNtFZb+t/ZRtXcDiVnzpqx8mPcDWxgARUqx0W1QB9MeUZiNrV4qP+Ehc+BpNgATsTX8ozYKL2NtFYAHc84fG7ndxUPr+AR/iQSns7uSUufAymwDOb2+NjK27lEFocm/EE2WpyIy/Hi66MWuMKJn8RvxIcj87IM5Vh9663ziW36kR0HNenXuxmfaD8JC7tfKbrhFr7LiZCrMjrzTeGx+PmkosrkNzW94ObzwocJ7A1HokLolY+AvkTiD/q1H0cN48c5EL8Crkttsa/AXQVDmutfyku0E7jShx49XqV3MFK8IryDhYVbj7Sj2P2eBxwcXoe8T8idsKKPRcnZw1b+slFTubwUwhktrfnAt7J++jwQtLZcm3sr9LQrjRzz6cfMv9aLvgmnAGvpoaGLxM4mAEaLV7iAzQ3oU0IvD5x9ix3yF2RAAuYAOO2f7PEFWCXZ4C9Pb2UsgDeVnFSpbFK7/IWu7TPTvBqzbGdCHOJQSxiEjt6IyZmxQyEJHv6xyQsYk//moVFsN2zP6fRImjfq7/n/wFDguUQFNEwugAAAABJRU5ErkJggg==');
}
div.callout-important.callout-style-default .callout-title {
background-color: #f7dddc
}
div.callout-warning {
border-left-color: #f0ad4e !important;
}
div.callout-warning .callout-icon::before {
background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAAXNSR0IArs4c6QAAAERlWElmTU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAA6ABAAMAAAABAAEAAKACAAQAAAABAAAAIKADAAQAAAABAAAAIAAAAACshmLzAAAETklEQVRYCeVWW2gcVRg+58yaTUnizqbipZeX4uWhBEniBaoUX1Ioze52t7sRq6APio9V9MEaoWlVsFasRq0gltaAPuxms8lu0gcviE/FFOstVbSIxgcv6SU7EZqmdc7v9+9mJtNks51NTUH84ed889/PP+cmxP+d5FIbMJmNbpREu4WUkiTtCicKny0l1pIKmBzovF2S+hIJHX8iEu3hZJ5lNZGqyRrGSIQpq15AzF28jgpeY6yk6GVdrfFqdrD6Iw+QlB8g0YS2g7dyQmXM/IDhBhT0UCiRf59lfqmmDvzRt6kByV/m4JjtzuaujMUM2c5Z2d6JdKrRb3K2q6mA+oYVz8JnDdKPmmNthzkAk/lN63sYPgevrguc72aZX/L9C6x09GYyxBgCX4NlvyGUHOKELlm5rXeR1kchuChJt4SSwyddZRXgvwMGvYo4QSlk3/zkHD8UHxwVJA6zjZZqP8v8kK8OWLnIZtLyCAJagYC4rTGW/9Pqj92N/c+LUaAj27movwbi19tk/whRCIE7Q9vyI6yvRpftAKVTdUjOW40X3h5OXsKCdmFcx0xlLJoSuQngnrJe7Kcjm4OMq9FlC7CMmScQANuNvjfP3PjGXDBaUQmbp296S5L4DrpbrHN1T87ZVEZVCzg1FF0Ft+dKrlLukI+/c9ENo+TvlTDbYFvuKPtQ9+l052rXrgKoWkDAFnvh0wTOmYn8R5f4k/jN/fZiCM1tQx9jQQ4ANhqG4hiL0qIFTGViG9DKB7GYzgubnpofgYRwO+DFjh0Zin2m4b/97EDkXkc+f6xYAPX0KK2I/7fUQuwzuwo/L3AkcjugPNixC8cHf0FyPjWlItmLxWw4Ou9YsQCr5fijMGoD/zpdRy95HRysyXA74MWOnscpO4j2y3HAVisw85hX5+AFBRSHt4ShfLFkIMXTqyKFc46xdzQM6XbAi702a7sy04J0+feReMFKp5q9esYLCqAZYw/k14E/xcLLsFElaornTuJB0svMuJINy8xkIYuL+xPAlWRceH6+HX7THJ0djLUom46zREu7tTkxwmf/FdOZ/sh6Q8qvEAiHpm4PJ4a/doJe0gH1t+aHRgCzOvBvJedEK5OFE5jpm4AGP2a8Dxe3gGJ/pAutug9Gp6he92CsSsWBaEcxGx0FHytmIpuqGkOpldqNYQK8cSoXvd+xLxXADw0kf6UkJNFtdo5MOgaLjiQOQHcn+A6h5NuL2s0qsC2LOM75PcF3yr5STuBSAcGG+meA14K/CI21HcS4LBT6tv0QAh8Dr5l93AhZzG5ZJ4VxAqdZUEl9z7WJ4aN+svMvwHHL21UKTd1mqvChH7/Za5xzXBBKrUcB0TQ+Ulgkfbi/H/YT5EptrGzsEK7tR1B7ln9BBwckYfMiuSqklSznIuoIIOM42MQO+QnduCoFCI0bpkzjCjddHPN/F+2Yu+sd9bKNpVwHhbS3LluK/0zgfwD0xYI5dXuzlQAAAABJRU5ErkJggg==');
}
div.callout-warning.callout-style-default .callout-title {
background-color: #fcefdc
}
div.callout-tip {
border-left-color: #02b875 !important;
}
div.callout-tip .callout-icon::before {
background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAAXNSR0IArs4c6QAAAERlWElmTU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAA6ABAAMAAAABAAEAAKACAAQAAAABAAAAIKADAAQAAAABAAAAIAAAAACshmLzAAADr0lEQVRYCe1XTWgTQRj9ZjZV8a9SPIkKgj8I1bMHsUWrqYLVg4Ue6v9BwZOxSYsIerFao7UiUryIqJcqgtpimhbBXoSCVxUFe9CTiogUrUp2Pt+3aUI2u5vdNh4dmMzOzHvvezuz8xNFM0mjnbXaNu1MvFWRXkXEyE6aYOYJpdW4IXuA4r0fo8qqSMDBU0v1HJUgVieAXxzCsdE/YJTdFcVIZQNMyhruOMJKXYFoLfIfIvVIMWdsrd+Rpd86ZmyzzjJmLStqRn0v8lzkb4rVIXvnpScOJuAn2ACC65FkPzEdEy4TPWRLJ2h7z4cArXzzaOdKlbOvKKX25Wl00jSnrwVxAg3o4dRxhO13RBSdNvH0xSARv3adTXbBdTf64IWO2vH0LT+cv4GR1DJt+DUItaQogeBX/chhbTBxEiZ6gftlDNXTrvT7co4ub5A6gp9HIcHvzTa46OS5fBeP87Qm0fQkr4FsYgVQ7Qg+ZayaDg9jhg1GkWj8RG6lkeSacrrHgDaxdoBiZPg+NXV/KifMuB6//JmYH4CntVEHy/keA6x4h4CU5oFy8GzrBS18cLJMXcljAKB6INjWsRcuZBWVaS3GDrqB7rdapVIeA+isQ57Eev9eCqzqOa81CY05VLd6SamW2wA2H3SiTbnbSxmzfp7WtKZkqy4mdyAlGx7ennghYf8voqp9cLSgKdqNfa6RdRsAAkPwRuJZNbpByn+RrJi1RXTwdi8RQF6ymDwGMAtZ6TVE+4uoKh+MYkcLsT0Hk8eAienbiGdjJHZTpmNjlbFJNKDVAp2fJlYju6IreQxQ08UJDNYdoLSl6AadO+fFuCQqVMB1NJwPm69T04Wv5WhfcWyfXQB+wXRs1pt+nCknRa0LVzSA/2B+a9+zQJadb7IyyV24YAxKp2Jqs3emZTuNnKxsah+uabKbMk7CbTgJx/zIgQYErIeTKRQ9yD9wxVof5YolPHqaWo7TD6tJlh7jQnK5z2n3+fGdggIOx2kaa2YI9QWarc5Ce1ipNWMKeSG4DysFF52KBmTNMmn5HqCFkwy34rDg05gDwgH3bBi+sgFhN/e8QvRn8kbamCOhgrZ9GJhFDgfcMHzFb6BAtjKpFhzTjwv1KCVuxHvCbsSiEz4CANnj84cwHdFXAbAOJ4LTSAawGWFn5tDhLMYz6nWeU2wJfIhmIJBefcd/A5FWQWGgrWzyORZ3Q6HuV+Jf0Bj+BTX69fm1zWgK7By1YTXchFDORywnfQ7GpzOo6S+qECrsx2ifVQAAAABJRU5ErkJggg==');
}
div.callout-tip.callout-style-default .callout-title {
background-color: #ccf1e3
}
div.callout-caution {
border-left-color: #fd7e14 !important;
}
div.callout-caution .callout-icon::before {
background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAAXNSR0IArs4c6QAAAERlWElmTU0AKgAAAAgAAYdpAAQAAAABAAAAGgAAAAAAA6ABAAMAAAABAAEAAKACAAQAAAABAAAAIKADAAQAAAABAAAAIAAAAACshmLzAAACV0lEQVRYCdVWzWoUQRCuqp2ICBLJXgITZL1EfQDBW/bkzUMUD7klD+ATSHBEfAIfQO+iXsWDxJsHL96EHAwhgzlkg8nBg25XWb0zIb0zs9muYYWkoKeru+vn664fBqElyZNuyh167NXJ8Ut8McjbmEraKHkd7uAnAFku+VWdb3reSmRV8PKSLfZ0Gjn3a6Xlcq9YGb6tADjn+lUfTXtVmaZ1KwBIvFI11rRXlWlatwIAAv2asaa9mlB9wwygiDX26qaw1yYPzFXg2N1GgG0FMF8Oj+VIx7E/03lHx8UhvYyNZLN7BwSPgekXXLribw7w5/c8EF+DBK5idvDVYtEEwMeYefjjLAdEyQ3M9nfOkgnPTEkYU+sxMq0BxNR6jExrAI31H1rzvLEfRIdgcv1XEdj6QTQAS2wtstEALLG1yEZ3QhH6oDX7ExBSFEkFINXH98NTrme5IOaaA7kIfiu2L8A3qhH9zRbukdCqdsA98TdElyeMe5BI8Rs2xHRIsoTSSVFfCFCWGPn9XHb4cdobRIWABNf0add9jakDjQJpJ1bTXOJXnnRXHRf+dNL1ZV1MBRCXhMbaHqGI1JkKIL7+i8uffuP6wVQAzO7+qVEbF6NbS0LJureYcWXUUhH66nLR5rYmva+2tjRFtojkM2aD76HEGAD3tPtKM309FJg5j/K682ywcWJ3PASCcycH/22u+Bh7Aa0ehM2Fu4z0SAE81HF9RkB21c5bEn4Dzw+/qNOyXr3DCTQDMBOdhi4nAgiFDGCinIa2owCEChUwD8qzd03PG+qdW/4fDzjUMcE1ZpIAAAAASUVORK5CYII=');
}
div.callout-caution.callout-style-default .callout-title {
background-color: #ffe5d0
}
</style>
<style type="text/css">
.reveal div.sourceCode {
margin: 0;
overflow: auto;
}
.reveal div.hanging-indent {
margin-left: 1em;
text-indent: -1em;
}
.reveal .slide:not(.center) {
height: 100%;
}
.reveal .slide.scrollable {
overflow-y: auto;
}
.reveal .footnotes {
height: 100%;
overflow-y: auto;
}
.reveal .slide .absolute {
position: absolute;
display: block;
}
.reveal .footnotes ol {
counter-reset: ol;
list-style-type: none;
margin-left: 0;
}
.reveal .footnotes ol li:before {
counter-increment: ol;
content: counter(ol) ". ";
}
.reveal .footnotes ol li > p:first-child {
display: inline-block;
}
.reveal .slide ul,
.reveal .slide ol {
margin-bottom: 0.5em;
}
.reveal .slide ul li,
.reveal .slide ol li {
margin-top: 0.4em;
margin-bottom: 0.2em;
}
.reveal .slide ul[role="tablist"] li {
margin-bottom: 0;
}
.reveal .slide ul li > *:first-child,
.reveal .slide ol li > *:first-child {
margin-block-start: 0;
}
.reveal .slide ul li > *:last-child,
.reveal .slide ol li > *:last-child {
margin-block-end: 0;
}
.reveal .slide .columns:nth-child(3) {
margin-block-start: 0.8em;
}
.reveal blockquote {
box-shadow: none;
}
.reveal .tippy-content>* {
margin-top: 0.2em;
margin-bottom: 0.7em;
}
.reveal .tippy-content>*:last-child {
margin-bottom: 0.2em;
}
.reveal .slide > img.stretch.quarto-figure-center,
.reveal .slide > img.r-stretch.quarto-figure-center {
display: block;
margin-left: auto;
margin-right: auto;
}
.reveal .slide > img.stretch.quarto-figure-left,
.reveal .slide > img.r-stretch.quarto-figure-left {
display: block;
margin-left: 0;
margin-right: auto;
}
.reveal .slide > img.stretch.quarto-figure-right,
.reveal .slide > img.r-stretch.quarto-figure-right {
display: block;
margin-left: auto;
margin-right: 0;
}
</style>
<script>
window.MathJax = {
tex: {
tags: 'ams'
}
};
</script>
</head>
<body class="quarto-light">
<div class="reveal">
<div class="slides">
<section id="title-slide" class="quarto-title-block center">
<h1 class="title"><div class="line-block">Locally Adaptive Online<br>
Functional Data Analysis</div></h1>
<div class="quarto-title-authors">
<div class="quarto-title-author">
<div class="quarto-title-author-name">
Valentin Patilea<span class="math inline">\(^\dagger\)</span>
</div>
<p class="quarto-title-affiliation">
ENSAI & CREST<span class="math inline">\(^\dagger\)</span>, valentin.patilea@ensai.fr
</p>
</div>
<div class="quarto-title-author">
<div class="quarto-title-author-name">
Jeffrey S. Racine<span class="math inline">\(^\ddagger\)</span>
</div>
<p class="quarto-title-affiliation">
McMaster University<span class="math inline">\(^\ddagger\)</span>, racinej@mcmaster.ca
</p>
</div>
</div>
<p class="date">Tuesday, June 25, 2024</p>
</section>
<section id="slide-pro-tips" class="title-slide slide level1 center">
<h1>Slide Pro-Tips</h1>
<div>
<ul>
<li><p>Link to slides - <a href="https://jeffreyracine.github.io/Braga">jeffreyracine.github.io/Braga</a> (case sensitive, <a href="https://jeffreyracine-github-io.translate.goog/Braga/?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=wapp#/title-slide">Google Translate</a>)</p></li>
<li><p>Link to paper - <a href="https://ideas.repec.org/p/mcm/deptwp/2024-04.html">https://ideas.repec.org/p/mcm/deptwp/2024-04.html</a></p></li>
<li><p>View <strong>full screen</strong> by pressing the F key (press the Esc key to revert)</p></li>
<li><p>Access <strong>navigation menu</strong> by pressing the M key (click X in navigation menu to close)</p></li>
<li><p><strong>Advance</strong> using arrow keys</p></li>
<li><p><strong>Zoom</strong> in by holding down the Alt key in MS Windows, Opt key in macOS or Ctrl key in Linux, and clicking on any screen element (Alt/Opt/Ctrl click again to zoom out)</p>
<!--
- Use **copy to clipboard** button for R code blocks (upper right in block) to copy and paste into R/RStudio
--></li>
<li><p><strong>Export to a PDF</strong> by pressing the E key (wait a few seconds, then print [or print using system dialog], enable landscape layout, then save as PDF - press the E key to revert)</p></li>
<li><p>Enable drawing tools - chalk <strong>board</strong> by pressing the B key (B to revert), notes <strong>canvas</strong> by pressing the C key (C to revert), press the Del key to erase, press the D key to <strong>download drawings</strong></p></li>
</ul>
</div>
<aside class="notes">
<p>Encourage participants to print/save a PDF copy of the slides as there is no guarantee that this material will be there when they realize it might be useful</p>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section>
<section id="abstract" class="title-slide slide level1 center">
<h1>Abstract</h1>
<p>One drawback of classical smoothing methods (kernels, splines, wavelets, etc.) is their reliance on an assumed degree of smoothness (and thereby on continuous differentiability up to some order) for the underlying object being estimated. However, the underlying object may in fact be irregular (i.e., non-smooth and perhaps even nowhere differentiable) and, moreover, the (ir)regularity of the underlying function may vary across its support. Elaborate adaptive methods for curve estimation have been proposed; however, their intrinsic complexity presents a formidable, and perhaps even insurmountable, barrier to their widespread adoption by practitioners. We contribute to the functional data literature by providing a pointwise MSE-optimal, data-driven, iterative plug-in estimator of “local regularity” and a computationally attractive, recursive, online updating method. In so doing we are able to separate measurement error “noise” from “irregularity” thanks to “replication”, a hallmark of functional data. Our results open the door for the construction of minimax optimal rates, “honest” confidence intervals, and the like, for various quantities of interest.</p>
</section>
<section>
<section id="outline-of-talk" class="title-slide slide level1 center">
<h1>Outline of Talk</h1>
<div>
<ul>
<li><p>Modern data is often <em>functional</em> in nature (e.g., an electrocardiogram (ECG) and many other measures recorded by wearable devices)</p></li>
<li><p>The analysis of functional data requires <em>nonparametric methods</em></p></li>
<li><p>However, nonparametric methods rely on <em>smoothness assumptions</em> (i.e., they require assuming something we don’t know)</p></li>
<li><p>We show how we can learn the degree of (non)smoothness in functional data settings and we separate this from <em>measurement noise</em> (this cannot be done with classical data)</p></li>
<li><p>This allows us to conduct functional data analysis that is optimal (we do this in a statistical framework)</p></li>
<li><p>We emphasize <em>online</em> computation (i.e., how to update when new functional data becomes available)</p></li>
</ul>
</div>
<aside class="notes">
<ul>
<li><p>Present overview of slides</p></li>
<li><p>Mention appendices and extra material (don’t be fooled by slide numbers - for your leisure not to be covered in the allotted time)</p></li>
<li><p>Presuming some may be unfamiliar with functional data elements, start with classical versus function sample elements</p></li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section>
<section id="classical-versus-functional-data" class="slide level2 center">
<h2>Classical Versus Functional Data</h2>
<ul>
<li class="fragment"><p>A <strong>defining feature of classical regression analysis</strong> is that</p>
<ul>
<li class="fragment"><p><strong>sample elements are random pairs</strong>, <span class="math inline">\((y_i,x_i)\)</span></p></li>
<li class="fragment"><p>the <em>function of interest</em> <span class="math inline">\(\mathbb{E}(Y|X=x)\)</span> is <em>non-random</em></p></li>
</ul></li>
<li class="fragment"><p>A <strong>defining feature of functional data analysis </strong> is that</p>
<ul>
<li class="fragment"><p><strong>sample elements are random functions</strong>, <span class="math inline">\(X^{(i)}\)</span></p></li>
<li class="fragment"><p>these are <em>also functions of interest</em></p></li>
</ul></li>
<li class="fragment"><p>The following figure (<a href="#/fig-sampleelements" class="quarto-xref">Figure 1</a>) presents <span class="math inline">\(N=25\)</span> sample elements (classical left plot, functional right plot)</p></li>
</ul>
<aside class="notes">
<ul>
<li>Point out that we are really in quite a different world when dealing with functions and need to think carefully about the data generating process</li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section>
<section id="classical-versus-functional-data-1" class="slide level2 center">
<h2>Classical Versus Functional Data</h2>
<img data-src="index_files/figure-revealjs/fig-sampleelements-1.png" class="quarto-figure quarto-figure-center r-stretch" width="960"><p class="caption">
Figure 1: Classical Versus Functional Sample Elements (<span class="math inline">\(N=25\)</span>)
</p></section>
<section id="functional-data" class="slide level2 center">
<h2>Functional Data</h2>
<ul>
<li class="fragment"><p>FDA is <em>the statistical analysis of samples of curves</em> (i.e., samples of random variables taking values in spaces of functions)</p></li>
<li class="fragment"><p>FDA has heterogeneous, longitudinal aspects (“individual trajectories”)</p></li>
<li class="fragment"><p>Curves are continuums, so <em>we never know the curve values at all points</em></p>
<ul>
<li class="fragment"><p>The curves are <em>only available at discrete points</em> (e.g., <span class="math inline">\((Y^{(i)}_m , T^{(i)}_m) \in\mathbb R \times [0,1]\)</span>)</p></li>
<li class="fragment"><p>The points at which curves are available can <em>differ across curves</em></p></li>
<li class="fragment"><p>The curves may be <em>measured with error</em></p></li>
</ul></li>
<li class="fragment"><p>Consider measurements taken from 1 random curve:</p>
<ul>
<li class="fragment"><p><a href="#/fig-nonnoisyfuncall" class="quarto-xref">Figure 2</a> is measured without error from an <em>irregular</em> curve</p></li>
<li class="fragment"><p><a href="#/fig-noisyfuncall" class="quarto-xref">Figure 3</a> is measured with error from a <em>smooth</em> curve</p></li>
<li class="fragment"><p><a href="#/fig-mfbm" class="quarto-xref">Figure 4</a> displays <em>varying</em> (ir)regularity <em>and</em> measurement noise</p></li>
</ul></li>
</ul>
<aside class="notes">
<ul>
<li><p>Stop at “The curves may be <em>measured with error</em>”</p></li>
<li><p>Screen mirror RStudio with manipulate_mfbr.R and walk through the issues</p></li>
<li><p>Then proceed to discuss what we know and don’t know and “crimes” (assuming smoothness)</p></li>
<li><p>Some researchers <span class="citation" data-cites="horvath_kokoszka:2012">(<a href="#/references-scrollable" role="doc-biblioref" onclick="">Horváth and Kokoszka 2012</a>)</span> presume the curves are measured <strong>without error</strong> at <em>any t</em> (unrealistic to say the least, all theory in 5 pages), i.e., they suppose they have the <em>true curves</em> at any point</p></li>
<li><p>Sometimes what people do in practice is a <strong>crime</strong>… they smooth discrete points with splines <em>then</em> <strong>proceed presuming they have the true curves</strong> (theory is blindly applied forgetting the curves are not the true ones - there is no theory on FPCA which corresponds to real data)</p></li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section>
<section id="fda-sample-element-1-random-function" class="slide level2 center">
<h2>FDA Sample Element (1 Random Function)</h2>
<img data-src="index_files/figure-revealjs/fig-nonnoisyfuncall-1.png" class="quarto-figure quarto-figure-center r-stretch" width="960"><p class="caption">
Figure 2: Irregular Function, Data Measured Without Error
</p></section>
<section id="fda-sample-element-1-random-function-1" class="slide level2 center">
<h2>FDA Sample Element (1 Random Function)</h2>
<img data-src="index_files/figure-revealjs/fig-noisyfuncall-1.png" class="quarto-figure quarto-figure-center r-stretch" width="960"><p class="caption">
Figure 3: Regular Function, Noisy Data
</p></section>
<section id="fda-sample-element-1-random-function-2" class="slide level2 center">
<h2>FDA Sample Element (1 Random Function)</h2>
<img data-src="index_files/figure-revealjs/fig-mfbm-1.png" class="quarto-figure quarto-figure-center r-stretch" width="960"><p class="caption">
Figure 4: Irregular Function, Varying Regularity, Noisy Data
</p><aside class="notes">
<ul>
<li><p>Tell audience we will now look at two real-world datasets</p></li>
<li><p>The first is quite “smooth” looking</p></li>
<li><p>The second is much less so</p></li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section>
<section id="fda-sample-elements-random-functions" class="slide level2 center">
<h2>FDA Sample Elements (Random Functions)</h2>
<img data-src="index_files/figure-revealjs/fig-growth-1.png" class="quarto-figure quarto-figure-center r-stretch" width="960"><p class="caption">
Figure 5: Berkeley Growth Study Data
</p><aside class="notes">
<ul>
<li><p>Point out that the data is unevenly spaced</p></li>
<li><p>4 measurements years 1-2, 1 per year until 8, 2 per year thereafter</p></li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section>
<section id="fda-sample-elements-random-functions-1" class="slide level2 center">
<h2>FDA Sample Elements (Random Functions)</h2>
<img data-src="index_files/figure-revealjs/fig-canWeather-1.png" class="quarto-figure quarto-figure-center r-stretch" width="960"><p class="caption">
Figure 6: Canadian Weather Study Data
</p><aside class="notes">
<ul>
<li><p>Point out that the data is evenly spaced</p></li>
<li><p>Curve less smooth than previous example</p></li>
<li><p>“Common design” (will come back to this)</p></li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
<!--
## Modelling Functional Data
- Why not use classical linear regression to model each curve?
- it would be difficult to justify given the nature of the data
- Why not use existing nonparametric methods to model each curve?
- it would be hard to justify smoothness assumptions, for one
- Why not use classical time series or longitudinal/panel analysis
- the observed points may not be equally spaced
- the various curves may not be sampled at the same time points
- the observed points may not be stationary
- *Ideally*, one would keep the observation unit as collected, and model data as realizations in a suitable space of objects (i.e., space of curves)
::: {.notes}
- Modern data often come in the form of curves (wearables, health measurements)
- Point out we need a new framework
:::
# Scope, Highlights, and Related Work
## Project Scope
- We drill down on *unknown regularity of unknown functions*
- We exploit a key feature of FDA, namely, *replication*
- We use data-driven local nonparametric kernel methods
- Our method adapts, *pointwise*, to
- the *local regularity* of the underlying process (i.e., the Hölder exponent $H_t$ defined shortly and to *measurement noise* (i.e., we are able to separate *noise* from *regularity* thanks to *replication*)
- the *purpose(s)* of estimation (i.e., $\mu(t)$ versus $\Gamma(s,t)$ defined shortly)
- Our method is computationally tractable:
- we have processed millions of curves, each containing hundreds of sample points, on a laptop in real-time as new online data arrives
::: {.notes}
- If I were to coin a slogan for this talk, it would be "think vertically!"
- Replication can be further exploited (i.e. beyond existing uses)
:::
## Highlights and Related Work
- We contribute to the literature by providing i) an MSE-optimal, data-driven, iterative plug-in estimator of local regularity and ii) a computationally attractive, recursive, online updating method
- Related work includes
- @gloter_hoffmann:2007, who consider noisy measurements of *one* sample path from a scaled fractional Brownian motion (fBm) with unknown (*scalar*, i.e., *constant*) Hurst parameter and unknown scale with measurements on an equidistant grid and heteroscedastic noise
- @gini_nickl:2010, @cai_low_ma:2014, and @ray:2017, who consider the construction of confidence or credible sets for a single curve of unknown regularity (the latter in a Bayesian framework)
- @golovkine_klutchnikoff_patilea:2022, who propose a non-smoothing estimator for local regularity of the trajectories of a stochastic process using order statistics (they also adopt a smoothing component, but this relies on an unknown constant $K_0$)
-->
</section></section>
<section>
<section id="functional-data-setting" class="title-slide slide level1 center">
<h1>Functional Data Setting</h1>
</section>
<section id="functional-data-1" class="slide level2 center">
<h2>Functional Data</h2>
<ul>
<li class="fragment"><p>Functional data carry information <em>along</em> the curves and <em>among</em> the curves</p></li>
<li class="fragment"><p>Consider a second-order stochastic process with continuous trajectories, <span class="math inline">\(X = (X_t : t\in [0,1])\)</span></p></li>
<li class="fragment"><p>The mean and covariance functions are <span class="math display">\[\begin{equation*}
\mu(t) = \mathbb{E}(X_t)\text{ and } \Gamma (s,t) = \mathbb{E}\left\{ [X_s - \mu(s)] [X_t-\mu(t)]\right\},\, s,t\in [0,1]
\end{equation*}\]</span></p></li>
<li class="fragment"><p>The framework we consider is one where independent sample path realizations <span class="math inline">\(X^{(i)}\)</span>, <span class="math inline">\(i=1,2\ldots,N\)</span>, of <span class="math inline">\(X\)</span> are measured with error at <em>discrete</em> times</p></li>
<li class="fragment"><p>The data associated with the <span class="math inline">\(i\)</span>th sample path <span class="math inline">\(X^{(i)}\)</span> consists of the pairs <span class="math inline">\((Y^{(i)}_m , T^{(i)}_m) \in\mathbb R \times [0,1]\)</span> generated as <span class="math display">\[\begin{equation*}
Y^{(i)}_m = X^{(i)}(T^{(i)}_m) + \varepsilon^{(i)}_m,
\qquad 1\leq m \leq M_i
\end{equation*}\]</span></p></li>
</ul>
<aside class="notes">
<ul>
<li><p>In probability theory and related fields, a <strong>stochastic</strong> or <strong>random process</strong> is a mathematical object usually defined as a family of random variables.</p></li>
<li><p>The term <strong>random function</strong> is also used to refer to a stochastic or random process, because a stochastic process can also be interpreted as a <em>random element in a function space</em>. The terms stochastic process and random process are used interchangeably.</p></li>
<li><p>https://en.wikipedia.org/wiki/Stochastic_process</p></li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
<!--
## Discrete Sample Points
- Here, $M_i$ (number of discrete sample points for the $i$th curve) is an integer which can be non-random and common to several $X^{(i)}$, or an independent draw from some positive integer random variable drawn independently of $X$
- The $T^{(i)}_m$ are the measurement times for $X^{(i)}$, which can be non-random, or be randomly drawn from some distribution, independently of $X$ and the $M_i$
- The case where $T^{(i)}_m$ are the same for several $X^{(i)}$, and implicitly the $M_i$ are the same too, is the so-called "common design" case
- The case where the $T^{(i)}_m$ are random is the so-called "independent design" case (our main focus lies here)
::: {.notes}
- For the theory, we have to let the number of curves $N$ and the expectation of $M_i$ increase, so the data is more like a triangular array (at least for the theory)
:::
-->
</section>
<section id="measurement-errors-and-design" class="slide level2 center">
<h2>Measurement Errors and Design</h2>
<ul>
<li class="fragment"><p>The <span class="math inline">\(\varepsilon^{(i)}_m\)</span> are measurement errors, and we allow <span class="math display">\[\begin{equation*}
\varepsilon^{(i)}_m = \sigma(T^{(i)}_m) e^{(i)}_m, \quad 1\leq m \leq M_i
\end{equation*}\]</span></p></li>
<li class="fragment"><p>The <span class="math inline">\(e^{(i)}_m\)</span> are independent copies of a centred variable <span class="math inline">\(e\)</span> with unit variance, and <span class="math inline">\(\sigma(T^{(i)}_m)\)</span> is some unknown bounded function which accounts for possibly heteroscedastic measurement errors</p></li>
<li class="fragment"><p>Our approach applies to both <em>independent design</em> and <em>common design</em> cases</p></li>
<li class="fragment"><p>Relative to the number of curves, <span class="math inline">\(N\)</span>, the number of points per curve, <span class="math inline">\(M_i\)</span>, may be small (“sparse”) or large (“dense”)</p></li>
</ul>
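<p>To make the sampling model concrete, the following minimal R sketch (an illustration only, not the paper's code; the Brownian-motion trajectories, the noise scale <span class="math inline">\(\sigma(t)=0.05+0.05\,t\)</span>, and <span class="math inline">\(M_i=100\)</span> are assumptions made purely for this example) generates noisy discrete observations <span class="math inline">\((Y^{(i)}_m, T^{(i)}_m)\)</span> from <span class="math inline">\(N\)</span> independent curves under an independent design:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">## Minimal simulation sketch: X^(i) = standard Brownian motion (illustrative choice),
## heteroscedastic noise sigma(t) = 0.05 + 0.05 t, M_i = 100 points per curve.
set.seed(42)
N &lt;- 25
M &lt;- 100
sigma.fun &lt;- function(t) 0.05 + 0.05 * t

one.curve &lt;- function(m) {
  Tm &lt;- sort(runif(m))                                   # random design points in [0, 1]
  X  &lt;- cumsum(c(0, rnorm(m - 1, sd = sqrt(diff(Tm)))))  # Brownian path (set to 0 at Tm[1])
  Y  &lt;- X + sigma.fun(Tm) * rnorm(m)                     # Y = X(T) + sigma(T) * e
  data.frame(t.obs = Tm, y = Y)
}

dat &lt;- do.call(rbind, lapply(1:N, function(i) cbind(i = i, one.curve(M))))
head(dat)</code></pre></div>
<p>The data frame <code>dat</code> stacks the <span class="math inline">\((T^{(i)}_m, Y^{(i)}_m)\)</span> pairs for all curves in long format, a convenient layout for the vertical averaging used in the estimation slides below.</p>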
<!--
## Estimation Grid, Batch/Online
- Let $\mathcal T_0\subset [0,1]$ be a set of points of interest
- Typically, $\mathcal T_0$ is a refined grid of equidistant points
- We wish to estimate the following functions:
- $\mu(\cdot)$, $\sigma(\cdot)$, $f_T(\cdot)$ on $\mathcal T_0$
- $\Gamma(\cdot,\cdot)$ on $\mathcal T_0 \times \mathcal T_0$
- We are also concerned with computational limitations frequently encountered in this framework (we propose a recursive solution via stochastic approximation algorithms)
- We will be interested in updating estimates as new functional data arises ("online") as well as performing estimation on an existing set of curves ("batch")
-->
</section>
<section id="replication" class="slide level2 center">
<h2>Replication</h2>
<ul>
<li class="fragment"><p>One key distinguishing feature of FDA is that of “replication” (i.e., <em>common structure</em> among curves)</p></li>
<li class="fragment"><p>Essentially, there is prior information in the <span class="math inline">\(N-1\)</span> sample curves that can be exploited to learn about the <span class="math inline">\(N\)</span>th, which is <em>not</em> available in, say, classical regression analysis</p></li>
<li class="fragment"><p>This common structure can be exploited for a variety of purposes</p></li>
<li class="fragment"><p>For instance, it will allow us to obtain estimates of the <em>regularity</em> of the curves that may vary across their domain <span class="math inline">\(t\in[0,1]\)</span> (i.e., <em>local regularity</em> estimates)</p></li>
<li class="fragment"><p>This would not be possible in the classical nonparametric setting where we are restricted to a single curve only</p></li>
</ul>
</section></section>
<section>
<section id="local-regularity" class="title-slide slide level1 center">
<h1>Local Regularity</h1>
<aside class="notes">
<p>Q <em>Can you provide a definition of “irregular function”?</em></p>
<ul>
<li><p>“It’s our meaning.”</p></li>
<li><p>“If the function is non-differentiable, and its regularity (in the sense of Hölder continuity, where the exponent gives the regularity) can vary, we call that situation an irregular one”</p></li>
<li><p>“I do not have a definition of irregular, because ‘regular’ is too vague.”</p></li>
<li><p>“It could be differentiable, Lipschitz continuous, Hölder continuous, analytic, etc, etc”</p></li>
<li><p>“Irregular is perhaps inappropriate for saying that the curves are not differentiable. <strong>Non-smooth is better</strong>.”</p></li>
<li><p>“But here we have the other aspect: the Hölder exponent H could vary. This means that <strong>we allow different degrees of non smoothness in different points</strong>. I think we may also call such process irregular. We then have to look at the local non-smoothness, and this is what we do because H is estimated in any point t.”</p></li>
<li><p>“So in some sense it is both non smooth and irregular.”</p></li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section>
<section id="overview" class="slide level2 center">
<h2>Overview</h2>
<ul>
<li class="fragment"><p>“<strong>Smoothness</strong> […] such as the <strong>existence of continuous second derivatives</strong>, is <strong>often imposed for regularization</strong> and is especially useful if nonparametric smoothing techniques are employed, as is prevalent in FDA” <span class="citation" data-cites="wang_chiou_muller:2016">(<a href="#/references-scrollable" role="doc-biblioref" onclick="">Wang, Chiou, and Müller 2016</a>)</span></p></li>
<li class="fragment"><p>This is problematic since imposing an unknown (and global) degree of smoothness may be incompatible with the underlying stochastic process</p></li>
<li class="fragment"><p>A key feature of our approach is its data-driven locally adaptive nature</p></li>
<li class="fragment"><p>We consider a meaningful regularity concept for the data generating process based on probability theory</p></li>
<li class="fragment"><p>We propose simple estimates for <em>local regularity</em> and link process regularity to sample path regularity</p></li>
</ul>
<aside class="notes">
<ul>
<li><p>“[w]e make the assumption that the underlying process generating the data is smooth. The observed data are subject to measurement error that may mask this smoothness” <span class="citation" data-cites="levitin_et_al:2007">(<a href="#/references-scrollable" role="doc-biblioref" onclick="">Levitin et al. 2007</a>, pg. 137-138)</span></p></li>
<li><p>“The assumption that a certain number of derivatives exist has been used in most of the analyses that we have considered. In this way we stabilize estimated principal components, regression functions, monotone transformations, canonical weight functions, and linear differential operators.</p>
<p>Are there more general concepts of regularity that would aid FDA?” <span class="citation" data-cites="ramsay_silverman:2005">(<a href="#/references-scrollable" role="doc-biblioref" onclick="">Ramsay and Silverman 2005</a>, pg. 380)</span></p></li>
<li><p><em>minimax optimal rates for mean and covariance functions</em>, <span class="citation" data-cites="lepski_mammen_spokoiny:1997 cai_yuan:2011">(<a href="#/references-scrollable" role="doc-biblioref" onclick="">Lepski, Mammen, and Spokoiny 1997</a>; <a href="#/references-scrollable" role="doc-biblioref" onclick="">Cai and Yuan 2011</a>)</span>, Cai &amp; Yuan (2010, 11, 12) (see minute 23 of zoom video feb 28 2023)</p></li>
<li><p>An estimator (estimation rule) is called <strong>minimax</strong> if its <strong>maximal risk is minimal</strong> among all estimators. In a sense this means that it is an estimator which performs best in the worst possible case allowed in the problem</p></li>
<li><p><em>adaptive confidence bands for nonparametric regression</em> <span class="citation" data-cites="cai_low_ma:2014">(<a href="#/references-scrollable" role="doc-biblioref" onclick="">Cai, Low, and Ma 2014</a>)</span> “Ideally, an adaptive confidence band should have its size automatically adjusted to the smoothness of the underlying function, while maintaining a prespecified coverage probability. However as we shall show such a goal is impossible even for Lipschitz function classes and hence a new framework for investigating adaptive confidence bands is needed.”</p></li>
<li><p><em>decrease of the eigenvalues of the covariance operator, which impacts optimal rates for the regression</em>, for instance <span class="citation" data-cites="belhakem_et_al:2021 hall_horowitz:2007">(<a href="#/references-scrollable" role="doc-biblioref" onclick="">Belhakem et al. 2021</a>; <a href="#/references-scrollable" role="doc-biblioref" onclick="">Hall and Horowitz 2007</a>)</span> (people say “suppose we know the rate of decrease of the eigenvalues of the covariance matrix operator of the covariates, <span class="math inline">\(\alpha\)</span>, which are functional in this case, but this rate is exactly <span class="math inline">\(2H+1\)</span>, so if we know <span class="math inline">\(H\)</span> we know the rate - it is like saying we know there are two derivatives, we know the <span class="math inline">\(\alpha\)</span>… how do they know that? They don’t!”) (minute 30 in the zoom video)</p></li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section>
<section id="definition" class="slide level2 center">
<h2>Definition</h2>
<ul>
<li class="fragment"><p>A key element of our approach is “local regularity”, which here is the largest order <em>fractional derivative</em> admitted by the sample paths of <span class="math inline">\(X\)</span> as measured by the value of <span class="math inline">\(H_t\)</span>, the “local Hölder exponent”, which may vary with <span class="math inline">\(t\)</span></p></li>
<li class="fragment"><p>More precisely, here “local regularity” is the largest value <span class="math inline">\(H_t\)</span> for which, uniformly with respect to <span class="math inline">\(u\)</span> and <span class="math inline">\(v\)</span> in a neighborhood of <span class="math inline">\(t\)</span>, the second order moment of <span class="math inline">\((X_u-X_v)/|u-v|^{H_t}\)</span> is finite</p></li>
<li class="fragment"><p>We can then assume <span class="math display">\[\begin{equation*}
\mathbb{E}\left[(X_u-X_v)^2\right]\approx L_t^2|u-v|^{2H_t}
\end{equation*}\]</span> when <span class="math inline">\(u\)</span> and <span class="math inline">\(v\)</span> lie in a neighborhood of <span class="math inline">\(t\)</span> (a brief simulation check of this scaling follows the list)</p></li>
<li class="fragment"><p>If a function is smooth (i.e., continuously differentiable), then <span class="math inline">\(H_t=1\)</span>, otherwise the function is non-smooth with <span class="math inline">\(0<H_t<1\)</span></p></li>
<li class="fragment"><p>If a function is a constant function then <span class="math inline">\(L_t=0\)</span>, otherwise <span class="math inline">\(L_t>0\)</span></p></li>
</ul>
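<p>For intuition, consider standard Brownian motion, for which <span class="math inline">\(\mathbb{E}[(X_u-X_v)^2]=|u-v|\)</span>, so that <span class="math inline">\(H_t=1/2\)</span> and <span class="math inline">\(L_t=1\)</span> at every <span class="math inline">\(t\)</span>. The following minimal R sketch (a Monte Carlo check under this assumed process, with an illustrative grid and replication count) verifies the scaling numerically:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">## Monte Carlo check of E[(X_u - X_v)^2] = L_t^2 |u - v|^(2 H_t) for standard
## Brownian motion, for which H_t = 1/2 and L_t = 1 at every t.
set.seed(1)
grid  &lt;- seq(0, 1, by = 0.01)
n.rep &lt;- 5000
## each column of 'paths' is one Brownian path observed on 'grid'
paths &lt;- replicate(n.rep,
                   cumsum(c(0, rnorm(length(grid) - 1, sd = sqrt(diff(grid))))))
theta.mc &lt;- function(u, v) {             # Monte Carlo estimate of E[(X_u - X_v)^2]
  iu &lt;- which.min(abs(grid - u))
  iv &lt;- which.min(abs(grid - v))
  mean((paths[iu, ] - paths[iv, ])^2)
}
round(c(estimate = theta.mc(0.50, 0.58),
        theory   = abs(0.50 - 0.58)^(2 * 0.5)), 4)   # both values are close to 0.08</code></pre></div>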
<aside class="notes">
<ul>
<li><p>The second order moment of the <strong>increments</strong></p></li>
<li><p>People may say “for a Hölder condition we have inequality, but why do you remove the inequality condition and replace with equality” and the response is “because you need equality in order to get estimates for <span class="math inline">\(H\)</span> and <span class="math inline">\(L\)</span> (if no equality, forget it), but the <em>good news</em> is that almost all the processes you find in all the probability books do satisfy this with <em>almost equal</em> so in fact there is no loss in generality (e.g., derivatives, squares, log(1+…) for Gaussian processes do satisfy this with equality), so this is not restrictive at all”</p></li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section>
<section id="definition-1" class="slide level2 center">
<h2>Definition</h2>
<ul>
<li class="fragment"><p>Let <span class="math inline">\([t-\Delta_*/2, t + \Delta_*/2] \cap [0,1]\)</span>, and define <span class="math display">\[\begin{align*}
\theta(u,v) &= \mathbb{E}\left[ (X_u-X_v)^{2} \right],
\quad\text{ hence }\\
\theta(u,v) &\approx L_t^2 |u-v|^{2H_t} \quad \text{if } u \text{ and } v \text{ are close to } t
\end{align*}\]</span></p></li>
<li class="fragment"><p>Letting <span class="math inline">\(\Delta_* =2^{-1}e^{-\log(\bar M_i)^{1/3}}>0\)</span>, <span class="math inline">\(t_1=t-\Delta_*/2\)</span>, <span class="math inline">\(t_3= t + \Delta_*/2\)</span>, and <span class="math inline">\(t_2=(t_1+t_3)/2\)</span> (the definition of <span class="math inline">\(t_1\)</span> and <span class="math inline">\(t_3\)</span> is adjusted in boundary regions), then we show that <span class="math display">\[\begin{equation*}
H_t \approx \frac{\log(\theta(t_1,t_3)) - \log(\theta(t_1,t_2))}{2\log(2)} \quad \text{if } |t_3-t_1| \text{ is small}
\end{equation*}\]</span> (a short derivation of this expression follows the list)</p></li>
<li class="fragment"><p>Moreover, <span class="math display">\[\begin{equation*}
L_t \approx \frac{\sqrt{\theta(t_1,t_3)}}{|t_1-t_3|^{H_t} } \quad \text{if } |t_3-t_1| \text{ is small}
\end{equation*}\]</span></p></li>
</ul>
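<p>The expression for <span class="math inline">\(H_t\)</span> follows directly from the scaling relation above: since <span class="math inline">\(|t_3-t_1|=\Delta_*\)</span> and <span class="math inline">\(|t_2-t_1|=\Delta_*/2\)</span>, <span class="math display">\[\begin{equation*}
\frac{\theta(t_1,t_3)}{\theta(t_1,t_2)} \approx \frac{L_t^2\,\Delta_*^{2H_t}}{L_t^2\,(\Delta_*/2)^{2H_t}} = 2^{2H_t}
\quad\Longrightarrow\quad
H_t \approx \frac{\log(\theta(t_1,t_3)) - \log(\theta(t_1,t_2))}{2\log(2)},
\end{equation*}\]</span> and solving <span class="math inline">\(\theta(t_1,t_3)\approx L_t^2|t_1-t_3|^{2H_t}\)</span> for <span class="math inline">\(L_t\)</span> gives the second expression</p>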
<aside class="notes">
<ul>
<li><p><span class="math inline">\(\Delta_*\)</span> should go to 0</p></li>
<li><p><span class="math inline">\(\Delta_*\)</span> to some power should be larger than the pre-smoothing error (pre-smoothing error should be negligible to <span class="math inline">\(\Delta\)</span> to some power)</p></li>
<li><p>Rate of convergence of pre-smoothing is a power of <span class="math inline">\(1/N\)</span> (power of, say, <span class="math inline">\(2/5\)</span> if <span class="math inline">\(2\)</span> derivatives are assumed to exist, etc.) so we need a power that is a polynomial larger than <span class="math inline">\(1/N\)</span> which is achieved by an exponential -log() with a power that is smaller than 1 (explains why <span class="math inline">\(\Delta_*\)</span> is of this form) should be going to zero slower than any polynomial 1/N</p></li>
<li><p><span class="math inline">\(\Delta_*\)</span> should be negligible with respect to rate of convergence with respect to <span class="math inline">\(h\)</span></p></li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section>
<section id="estimation" class="slide level2 center">
<h2>Estimation</h2>
<ul>
<li class="fragment"><p>The idea is to estimate <span class="math inline">\(\theta(t_1,t_3)\)</span> and <span class="math inline">\(\theta(t_1,t_2)\)</span> <em>averaging vertically</em> over curves</p></li>
<li class="fragment"><p>Given estimates <span class="math inline">\(\widehat\theta(t_1,t_3)\)</span> and <span class="math inline">\(\widehat\theta(t_1,t_2)\)</span>, the estimators of <span class="math inline">\(H_{t}\)</span> and <span class="math inline">\(L_t\)</span> are given by <span class="math display">\[\begin{equation*}
\widehat H_t = \frac{\log(\widehat\theta(t_1,t_3)) - \log(\widehat\theta(t_1,t_2))}{2\log(2)},\quad
\widehat L_t = \frac{\sqrt{\widehat\theta(t_1,t_3)}}{|t_1-t_3|^{\widehat H_t} }
\end{equation*}\]</span></p></li>
<li class="fragment"><p>The estimator <span class="math inline">\(\widehat{\theta}(t_l,t_j)\)</span> is the average of local curve smoothers, i.e., <span class="math display">\[\begin{equation*}
\widehat{\theta}(t_l,t_j)=\frac{1}{N}\sum_{i = 1}^N\left(\widetilde X^{(i)}(t_l)-\widetilde X^{(i)}(t_j)\right)^2
\end{equation*}\]</span></p></li>
<li class="fragment"><p>The smoother <span class="math inline">\(\widetilde X^{(i)}(t)\)</span> depends on a bandwidth <span class="math inline">\(h_t\)</span> that, post-iteration, adapts to the local regularity of the underlying process</p></li>
</ul>
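<p>A minimal R sketch of these estimators is given below (illustration only: it uses a Gaussian-kernel Nadaraya-Watson smoother with a single fixed bandwidth <code>h</code>, a fixed spacing <code>Delta</code> in place of <span class="math inline">\(\Delta_*\)</span>, and no de-meaning or boundary adjustment; <code>dat</code> is the long-format data frame from the simulation sketch earlier):</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">## Sketch of the local-regularity estimators on 'dat' (columns i, t.obs, y).
nw &lt;- function(t, Tm, Y, h) {            # Nadaraya-Watson smoother at the point t
  w &lt;- dnorm((Tm - t) / h)               # Gaussian kernel weights (one illustrative choice)
  sum(w * Y) / sum(w)
}
theta.hat &lt;- function(dat, tl, tj, h) {  # average the squared differences over the N curves
  mean(sapply(split(dat, dat$i), function(d)
    (nw(tl, d$t.obs, d$y, h) - nw(tj, d$t.obs, d$y, h))^2))
}
H.hat &lt;- function(dat, t, Delta = 0.1, h = 0.05) {
  t1 &lt;- t - Delta / 2; t3 &lt;- t + Delta / 2; t2 &lt;- (t1 + t3) / 2
  (log(theta.hat(dat, t1, t3, h)) - log(theta.hat(dat, t1, t2, h))) / (2 * log(2))
}
L.hat &lt;- function(dat, t, Delta = 0.1, h = 0.05) {
  t1 &lt;- t - Delta / 2; t3 &lt;- t + Delta / 2
  H  &lt;- H.hat(dat, t, Delta, h)
  sqrt(theta.hat(dat, t1, t3, h)) / abs(t1 - t3)^H
}
## For the simulated Brownian-motion data the targets are H = 1/2 and L = 1; the
## naive fixed bandwidth induces some bias, which the plug-in iteration (next
## slides) is designed to address.
## H.hat(dat, t = 0.5); L.hat(dat, t = 0.5)</code></pre></div>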
<!--
## Nonparametric Smoother
- To construct $\widehat H_t$ and $\widehat L_t$ on the previous slide, we use a generic kernel-based smoother on *de-meaned* $Y^{(i)}_m$ given by \begin{equation*}
\widetilde X^{(i)}(t) = \sum_{m=1}^{M_i} W_{m}^{(i)}(t;h_t) \left(Y^{(i)}_m -\widetilde\mu(T^{(i)}_m)\right),\, \sum_{m=1}^{M_i}W_{m}^{(i)}(t;h_t) =1
\end{equation*}
- The weights $W_{m}^{(i)}(t;h_t)$ are functions of the elements in $\mathcal T_{obs}^{(i)} = \left\{ T_1^{(i)},\ldots, T_{M_i}^{(i)}\right\}$, and depend on a bandwidth $h_t$ which can vary with $t$
- The main case we have in mind is local polynomial smoothing
- In the case of non-differentiable functions (our main focus lies here), we consider the local constant NW estimator [@nadaraya:1965; @watson:1964]
## Nonparametric Smoother
- The weights of the NW estimator of $X^{(i)}(t)$ are \begin{equation*}
W_{m}^{(i)}(t;h_t) = K\left( \frac{T^{(i)}_m-t}{h_t} \right)\left[\sum_{m'=1}^{M_i} K\left( \frac{T^{(i)}_{m'}-t}{h_t} \right)\right]^{-1},
\quad 1\leq m \leq M_i
\end{equation*}
- $K(u)$ is a symmetric, non-negative, bounded kernel with support in $[-1,1]$ [e.g., @epanechnikov:1969]
- For estimating $\mu(t)$ and $\Gamma(s,t)$ we might entertain an indicator associated with the weights $W_{m}^{(i)}(t;h)$, that is \begin{equation*}
w^{(i)}(t;h) = 1 \quad \text{if} \quad \sum_{i=1}^{M_i} \mathbf 1 \left\{\left|T^{(i)}_m-t\right|\leq h\right\} >1, \quad \text{and } 0 \text{ otherwise}
\end{equation*}
- When $w^{(i)}(t;h) = 0$ we *discard* the $i$th curve *vertically* as we lack information local to $t$ (i.e., if all $W_{m}^{(i)}(t;h)=0$, where $0/0\coloneqq0$)
-->
</section></section>
<section>
<section id="assumptions" class="title-slide slide level1 center">
<h1>Assumptions</h1>
</section>
<section id="assumptions-1" class="slide level2 center">
<h2>Assumptions</h2>
<ul>
<li class="fragment"><p>The stochastic process <span class="math inline">\(X\)</span> is a random function taking values in <span class="math inline">\(L^2 (\mathcal T)\)</span>, with <span class="math inline">\(\mathbb E (\| X\|^2) <\infty\)</span></p></li>
<li class="fragment"><p>The process is <em>not</em> deterministic with all sample paths equal to a common path</p></li>
<li class="fragment"><p>The increments of the process have any moment, and the distributions of the increments are sub-Gaussian</p></li>
<li class="fragment"><p>The functions <span class="math inline">\(X^{(i)}(t)\)</span> may be nowhere differentiable</p></li>
<li class="fragment"><p>The process <span class="math inline">\(X\)</span> may be non-stationary with non-stationary increments</p></li>
<li class="fragment"><p>The measurement errors <span class="math inline">\(\varepsilon^{(i)}\)</span> may be heteroscedastic</p></li>
<li class="fragment"><p>The mean function <span class="math inline">\(\mu(t)\)</span> may be smoother than the <span class="math inline">\(X^{(i)}(t)\)</span> functions</p></li>
<li class="fragment"><p><span class="math inline">\(0<L_t<\infty\)</span> and <span class="math inline">\(0<H_t<1\)</span></p></li>
</ul>
<aside class="notes">
<ul>
<li>In probability theory, a sub-Gaussian distribution is a probability distribution with strong tail decay. Informally, the tails of a sub-Gaussian distribution are dominated by (i.e. decay at least as fast as) the tails of a Gaussian</li>
</ul>
<style type="text/css">
span.MJX_Assistive_MathML {
position:absolute!important;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0 0 0!important;
border: 0!important;
height: 1px!important;
width: 1px!important;
overflow: hidden!important;
display:block!important;
}</style></aside>
</section></section>
<section>
<section id="estimation-of-local-regularity" class="title-slide slide level1 center">
<h1>Estimation of Local Regularity</h1>
</section>
<section id="methodology" class="slide level2 center">
<h2>Methodology</h2>
<ul>
<li class="fragment"><p>We take the optimal bandwidth expression for <span class="math inline">\(h_t\)</span> (which depends on <span class="math inline">\(H_t\)</span> and <span class="math inline">\(L_t\)</span>) that minimizes <em>pointwise</em> MSE using a <em>general</em> squared bias term (not the usual term one gets assuming twice differentiable curves)</p></li>
<li class="fragment"><p>Then, given an initial batch of <span class="math inline">\(N\)</span> curves, we estimate <span class="math inline">\(H_t\)</span> and <span class="math inline">\(L_t\)</span> for each <span class="math inline">\(t\in \mathcal T_0\)</span>, which involves an <em>iterative plug-in</em> procedure:</p>
<ol type="1">
<li class="fragment"><p>begin with some starting values for the local bandwidths <span class="math inline">\(h_t\)</span></p></li>
<li class="fragment"><p>construct preliminary estimates of each curve for every <span class="math inline">\(t\in \mathcal T_0\)</span> using the data pairs <span class="math inline">\((Y^{(i)}_m , T^{(i)}_m)\)</span> and local bandwidth starting values</p></li>
<li class="fragment"><p>use these preliminary curve estimates to get starting values for <span class="math inline">\(H_t\)</span> and <span class="math inline">\(L_t\)</span> for every <span class="math inline">\(t\in \mathcal T_0\)</span>, and plug these into the optimal bandwidth expression</p></li>
<li class="fragment"><p>repeat 1-3 using the updated plug-in bandwidths; continue iterating <span class="math inline">\(H_T\)</span>, and <span class="math inline">\(L_t\)</span> and <span class="math inline">\(h_t\)</span> for every <span class="math inline">\(t\in \mathcal T_0\)</span> until the procedure stabilizes (this occurs quite quickly, typically after 10 or so iterations)</p></li>
</ol></li>
</ul>
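<p>A minimal sketch of this plug-in iteration at a single point <span class="math inline">\(t\)</span> follows (illustration only: it reuses <code>H.hat</code> and <code>L.hat</code> from the estimation sketch above, treats <span class="math inline">\(\sigma_t^2\)</span>, <span class="math inline">\(f_T(t)\)</span> and <span class="math inline">\(\bar M\)</span> as already estimated, and uses the MSE-optimal bandwidth expression stated on the next slide with an Epanechnikov kernel):</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">## Iterative plug-in sketch at a single point t.
h.opt &lt;- function(H, L, sigma2, fT, Mbar) {
  ## MSE-optimal bandwidth (see the next slide); K is the Epanechnikov kernel
  K      &lt;- function(u) 0.75 * (1 - u^2) * (abs(u) &lt;= 1)
  int.K2 &lt;- integrate(function(u) K(u)^2, -1, 1)$value
  int.uK &lt;- integrate(function(u) abs(u)^(2 * H) * K(u), -1, 1)$value
  ((sigma2 * int.K2) / (2 * H * L^2 * int.uK * fT * Mbar))^(1 / (2 * H + 1))
}
plug.in &lt;- function(dat, t, sigma2, fT, Mbar, h0 = 0.1, n.iter = 10) {
  h &lt;- h0                                 # step 1: starting value for the local bandwidth
  for (k in 1:n.iter) {                   # step 4: iterate until the procedure stabilizes
    H &lt;- H.hat(dat, t, h = h)             # steps 2-3: smooth the curves, update H_t and L_t
    H &lt;- min(max(H, 0.1), 1)              # clamp to (0, 1] for numerical stability in this sketch
    L &lt;- L.hat(dat, t, h = h)
    h &lt;- h.opt(H, L, sigma2, fT, Mbar)    # plug into the optimal-bandwidth expression
  }
  list(H = H, L = L, h = h)
}
## Example call with hypothetical plug-in inputs:
## plug.in(dat, t = 0.5, sigma2 = 0.006, fT = 1, Mbar = 100)</code></pre></div>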
</section>
<section id="details" class="slide level2 center">
<h2>Details</h2>
<ul>
<li class="fragment"><p>To estimate the <span class="math inline">\(i\)</span>th curve at a point <span class="math inline">\(t\)</span> with local Hölder exponent <span class="math inline">\(H_t\)</span> and local Hölder constant <span class="math inline">\(L_t\)</span>, the MSE-optimal bandwidth <span class="math inline">\(h^*_{t,HL}\)</span> is <span class="math display">\[\begin{equation*}
h^*_{t,HL} = \left[ \frac{\sigma_t^2 \int K^2(u)du }{2H_t L_t^2\times \int |u|^{2H_t}|K(u)|du\times f_T(t)}\times \frac{1}{\bar M_i} \right]^{\frac{1}{2H_t+1}}
\end{equation*}\]</span></p></li>
<li class="fragment"><p>The kernel function <span class="math inline">\(K(u)\)</span> is provided by the user hence <span class="math inline">\(\int K^2(u)du\)</span> and <span class="math inline">\(\int |u|^{2H_t}|K(u)|du\)</span> can be computed given <span class="math inline">\(H_t\)</span></p></li>
<li class="fragment"><p><span class="math inline">\(\sigma_t^2\)</span> is estimated using one-half the squared differences of the two closest <span class="math inline">\(Y^{(i)}\)</span> observations at <span class="math inline">\(t\)</span>, averaged across all curves</p></li>
<li class="fragment"><p>The design density <span class="math inline">\(f_T(t)\)</span> is straightforward to estimate</p></li>
<li class="fragment"><p>We estimate <span class="math inline">\(H_t\)</span> and <span class="math inline">\(L_t\)</span> for some batch of <span class="math inline">\(N\)</span> curves as outlined on the previous slide, then we recursively update them as online data arrives</p></li>
</ul>
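<p>For completeness, here is a minimal sketch of these two remaining plug-in ingredients, again on the long-format data <code>dat</code> used earlier (the nearest-two-observations rule is implemented in its simplest form, ignoring the gap between the two design points):</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">## sigma_t^2: one-half the squared difference of the two Y's observed closest to t,
## averaged across the N curves (crude form of the rule described above).
sigma2.hat &lt;- function(dat, t) {
  mean(sapply(split(dat, dat$i), function(d) {
    idx &lt;- order(abs(d$t.obs - t))[1:2]   # the two observations nearest t on this curve
    0.5 * diff(d$y[idx])^2
  }))
}
## f_T(t): kernel density estimate of the design density, evaluated at t.
fT.hat &lt;- function(dat, t) approx(density(dat$t.obs), xout = t)$y
## e.g. sigma2.hat(dat, t = 0.5); fT.hat(dat, t = 0.5)</code></pre></div>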
</section>
<section id="mfbm-example-independent-design" class="slide level2 center">
<h2>MfBm Example (independent design)</h2>