# Foundations of inference {#sec-foundations .unnumbered}

Among the key concepts in statistics is drawing conclusions about a population using information in a sample; this process is called statistical inference. Using computational methods as well as well-developed mathematical theory, we can understand how one dataset differs from another, even when the two datasets have been collected under identical settings. In this part, we will walk through the key concepts and terms that will be applied more explicitly in later chapters.

- [Chapter -@sec-foundations-randomization] describes randomization, which involves repeatedly permuting observations to represent scenarios in which there is no association between two variables of interest (see the short sketch after this list).
- [Chapter -@sec-foundations-bootstrapping] describes bootstrapping, which involves repeatedly sampling (with replacement) from the observed data in order to produce many samples that are similar to, but different from, the original data (also sketched after this list).
- [Chapter -@sec-foundations-mathematical] introduces the Central Limit Theorem, a theoretical, mathematical approximation to the variability in data seen through randomization and bootstrapping (stated informally after this list).
- In [Chapter -@sec-foundations-decision-errors] you will be presented with a structure for describing when and how errors can happen within statistical inference.
- [Chapter -@sec-foundations-applications] includes an application to the malaria vaccine case study, where the topics from this part of the book are fully developed.
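
To fix ideas before diving into the chapters, here is a minimal, illustrative base R sketch of randomization; it is not code from the later chapters. It uses a small made-up treatment/control dataset and repeatedly shuffles the group labels, mimicking a world in which the grouping has no association with the outcome.

```r
# Randomization (permutation) sketch with made-up data.
set.seed(47)
outcome <- c(12, 15, 14, 10, 9, 11, 13, 16, 8, 12)
group   <- rep(c("treatment", "control"), each = 5)

# Observed difference in group means.
obs_diff <- mean(outcome[group == "treatment"]) -
  mean(outcome[group == "control"])

# Shuffle the labels many times to build a null distribution.
perm_diffs <- replicate(1000, {
  shuffled <- sample(group)
  mean(outcome[shuffled == "treatment"]) -
    mean(outcome[shuffled == "control"])
})

# Proportion of shuffles at least as extreme as the observed difference.
mean(abs(perm_diffs) >= abs(obs_diff))
```

The proportion computed in the last line plays the role of a p-value; [Chapter -@sec-foundations-randomization] develops this logic carefully.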
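
Bootstrapping can be sketched just as briefly, again with made-up numbers rather than the book's data: resample the observed values with replacement many times and watch how the statistic of interest (here, the mean) varies across the resamples.

```r
# Bootstrap sketch with made-up data.
set.seed(47)
x <- c(12, 15, 14, 10, 9, 11, 13, 16, 8, 12)

# Resample with replacement and recompute the mean each time.
boot_means <- replicate(1000, mean(sample(x, replace = TRUE)))

# A rough 95% interval for the population mean, based on the resampled means.
quantile(boot_means, c(0.025, 0.975))
```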
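
For comparison, the mathematical route rests on the Central Limit Theorem. Stated informally for a sample mean (the precise conditions and notation are developed in [Chapter -@sec-foundations-mathematical]): for a sufficiently large sample of size $n$ from a population with mean $\mu$ and standard deviation $\sigma$,

$$
\bar{x} \;\overset{\text{approx.}}{\sim}\; N\!\left(\mu,\ \frac{\sigma}{\sqrt{n}}\right),
$$

that is, the sample mean is approximately normal with standard error $\sigma/\sqrt{n}$, describing mathematically the same dataset-to-dataset variability that randomization and bootstrapping reveal computationally.
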
Although computational and mathematical methods are often both appropriate (and give similar results), your study of both approaches should convince you that (1) there is almost never a single "correct" approach, and (2) there are different ways to quantify the variability seen from dataset to dataset.