<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>SUSTech BIO210 Biostatistics</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/</link>
<description>Recent content on SUSTech BIO210 Biostatistics</description>
<generator>Hugo -- gohugo.io</generator>
<language>en-gb</language>
<lastBuildDate>Wed, 22 May 2024 00:10:00 +0800</lastBuildDate>
<atom:link href="https://dbrg77.github.io/SUSTech-BIO210/index.xml" rel="self" type="application/rss+xml" />
<item>
<title>Lecture 34 Chi-squared Tests For Categorical Data</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-34/</link>
<pubDate>Wed, 22 May 2024 00:10:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-34/</guid>
<description>In this lecture, we will introduce the Chi-squared goodness-of-fit test. It is useful when we want to compare a categorical distribution observed in the data to a theoretical distribution. We will go through its intuition, its logic and its use cases. We are going to revisit what you learnt during high school: Mendel&rsquo;s Plant Hybridisation experiments. You will be introduced to a very famous debate in the history of science.</description>
</item>
<item>
<title>Lecture 33 One-way ANOVA Examples</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-33/</link>
<pubDate>Wed, 22 May 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-33/</guid>
<description>In the previous two lectures, we talked about the idea of ANOVA. We saw why it is called ANOVA even though it is actually used to compare means. As you have already seen, the calculation involved in doing an ANOVA test, even for only three groups, is quite daunting, especially when you want to perform post hoc tests. Therefore, in practice you should always use statistical software to perform this kind of test.</description>
</item>
<item>
<title>Lecture 32 ANOVA &amp; Post hoc Multiple Comparisons</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-32/</link>
<pubDate>Fri, 17 May 2024 00:10:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-32/</guid>
<description>Once we are familiar with the idea of ANOVA and see why we use it to compare means from more than two populations, there are still some technical details that we need to go through.
Probably the most obvious and important question is: what should I do if I reject the null hypothesis using ANOVA? We do post hoc tests in certain ways to control the experiment-wise error rate.
We will elaborate more during the lecture.</description>
</item>
<item>
<title>Lecture 31 Analysis of Variance (ANOVA)</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-31/</link>
<pubDate>Fri, 17 May 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-31/</guid>
<description>In real life, there are many situations where we need to compare the means of many groups ($\geqslant 3$) representing many different populations. How do we do this? Intuitively, we could just perform t-tests for every pair of them. This is actually not a bad idea when the number of groups ($k$) is small. However, when $k$ becomes large, we have to perform many tests. If you remember the content of the previous lecture, you should understand that this actually increases our chances of making type I errors.</description>
</item>
<item>
<title>Lecture 30 The Behaviour of The p-value</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-30/</link>
<pubDate>Fri, 10 May 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-30/</guid>
<description>There is not much to say in this post, because Lecture 30 is more like a practical session where we demonstrate how p-values behave when we perform a lot of hypothesis tests. We will do some live coding during the lecture.
References: Examples of misinterpretation of p-values</description>
</item>
<item>
<title>Lecture 29 Compare Two Population Variances</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-29/</link>
<pubDate>Fri, 10 May 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-29/</guid>
<description>In this lecture, we will introduce one method for comparing two population variances. In this case, we have two samples, independently drawn from two populations. Based on the sample variances, we want to investigate whether the population variances are equal or not. As said before, we cannot assess this problem directly. Instead, we approach it in an indirect way using the p-value. That is, if we assume that there is no difference between the variances of the two populations, what would be the probability of observing the data we have, or something more extreme?</description>
</item>
<item>
<title>Lecture 28 Compare Two Population Means</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-28/</link>
<pubDate>Wed, 08 May 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-28/</guid>
<description>Similar to the previous lecture, where we talked about the two-sample test for proportions, we are going to use the same notation for the population parameter representing the difference of the population means:
$$ \delta = \mu_1 -\mu_2 $$
In the same way, our null and alternative hypotheses for a two-sided test are:
$$ \boldsymbol{H_0:} \ \;\delta = \mu_1 - \mu_2 = 0 \\ \boldsymbol{H_1:} \ \; \delta = \mu_1 - \mu_2 \neq 0 $$</description>
</item>
<item>
<title>Lecture 27 Compare Two Population Proportions</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-27/</link>
<pubDate>Wed, 08 May 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-27/</guid>
<description>So far we have talked a lot about doing hypothesis testing for a particular population parameter, such as the population proportion $\pi$, the population mean $\mu$ and the population variance $\sigma^2$. We take one random sample from the population, and test if the population parameter is equal to or greater/less than a specific value. This is called a one-sample test. Most of the time the specific value comes from our previous knowledge about the population.</description>
</item>
<item>
<title>Lecture 26 Error, Power And Sample Size Estimation</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-26/</link>
<pubDate>Fri, 26 Apr 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-26/</guid>
<description>With all the examples we have gone through, you can see that we always set the significance level $\alpha$ to be a very small number, with $0.05$ being the most frequently used one. However, it is never $0$. The consequence is that there is still a chance we could draw the wrong conclusion when we make a decision based on the p-value. When that occurs, we make errors.
Whenever we make a decision, there are only two possible outcomes: either we reject $H_0$ or we do not reject $H_0$.</description>
</item>
<item>
<title>Lecture 25 More On Hypothesis Testing</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-25/</link>
<pubDate>Fri, 26 Apr 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-25/</guid>
<description>In the previous lectures, we practised the logic flow of hypothesis testing over and over again, so I hope you are now familiar with the concept. Basically, we have some data in our hands, and we come up with some hypotheses by incorporating knowledge about certain population parameters based on previous studies or experience. For example:
Hypotheses based on current data: the proportion of people who have type B may not be 9%.</description>
</item>
<item>
<title>Lecture 24 Hypothesis Testing Terms</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-24/</link>
<pubDate>Fri, 12 Apr 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-24/</guid>
<description>In the previous lecture, we talked about the intuition and the basic idea of hypothesis testing. There are still many things that need to be formalised. In addition, I&rsquo;m sure many of you had some burning questions. We dealt with them in this lecture.
Fisher vs. Neyman-Pearson: Historically, there are two different approaches to hypothesis testing. The method developed by Ronald Fisher allows us to compute the probability of observing the data, or something more extreme, under the null hypothesis, which is a default stance where we assume there is no difference, no effect, no association, no relationship etc.</description>
</item>
<item>
<title>Lecture 23 Introduction To Hypothesis Testing</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-23/</link>
<pubDate>Fri, 12 Apr 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-23/</guid>
<description>We have been using the properties of the sampling distributions of the sample mean, sample variance and sample proportion to make interval estimations of the corresponding population parameters. They are very useful, but they are not the whole story of inferential statistics. Otherwise, the course would be much shorter.
Today, we introduced a very powerful technique: hypothesis testing. As the name suggests, it allows us to test hypotheses. You see, we rarely know the properties of the population.</description>
</item>
<item>
<title>Lecture 22 Confidence Interval For The Proportion</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-22/</link>
<pubDate>Wed, 10 Apr 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-22/</guid>
<description>Now that we have figured out the sampling distribution of the sample proportion, we can make interval estimations and construct confidence intervals for the population proportion. Before we do that, let&rsquo;s use an example to look at the distribution of $P$ one more time.
A Simplified Version of Gene Expression: As we know, DNA is our genetic material. In order for a gene to function, the gene needs to be &ldquo;converted&rdquo; into proteins.</description>
</item>
<item>
<title>Lecture 21 Sampling Distribution of The Sample Proportion</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-21/</link>
<pubDate>Wed, 10 Apr 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-21/</guid>
<description>What we have been dealing with so far are all absolute quantities, such as the number of emails we received in a given time window and the body temperatures of healthy people. In many other situations, relative quantities are more meaningful. In this lecture, we introduce a new population parameter: the population proportion, representing the proportion (or fraction, percentage etc.) of the thing of our interest. To be consistent with the notation we are using in this course, we use the Greek letter $\pi$ to denote the population proportion.</description>
</item>
<item>
<title>Lecture 20 Confidence Interval For The Variance</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-20/</link>
<pubDate>Sun, 07 Apr 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-20/</guid>
<description>Once we have introduced the confidence intervals for the population mean $\mu$, the next question comes naturally: how do we construct confidence intervals for the population variance $\sigma^2$? Since we have previously discussed the sampling distribution of the sample variance, it is kind of straightforward for us to derive the confidence interval for the population variance.
Interval Estimation For $\boldsymbol{\sigma^2}$: We can just follow the same line of thought from the previous lecture, when we derived the CI for the population mean $\mu$.</description>
</item>
<item>
<title>Lecture 19 Confidence Interval For The Mean</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-19/</link>
<pubDate>Sun, 07 Apr 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-19/</guid>
<description>In the previous lectures, we talked about using the sample mean ($\bar{X}$ and $\bar{x}$) as the estimator and estimate for the population mean ($\mu$) and using the sample variance ($S^2$ and $s^2$) as the estimator and estimate for the population variance ($\sigma^2$). We also introduced one intuitive way of constructing an estimator: the maximum likelihood estimation (MLE). In those cases, we were just using one specific value, that is, just one number, to estimate the population parameter of interest.</description>
</item>
<item>
<title>Lecture 18 The Error Curve Derived By MLE</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-18/</link>
<pubDate>Fri, 29 Mar 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-18/</guid>
<description>Ronald Fisher introduced the method of using maximum likelihood estimation to estimate the parameters of a distribution or population. However, it was Carl Friedrich Gauss who first developed the idea of maximum likelihood. In fact, the normal PDF can be derived using the idea of maximum likelihood. We will investigate this in this lecture.
As mentioned in Lecture 12, the first question everybody has when they first encounter the normal PDF is: where does it come from?</description>
</item>
<item>
<title>Lecture 17 Maximum Likelihood Estimation (MLE)</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-17/</link>
<pubDate>Fri, 29 Mar 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-17/</guid>
<description>Previously, we always assumed that we somehow knew the population parameters, such as the mean ($\mu$) and the variance ($\sigma^2$). Starting from this lecture, we have finally reached a stage where we do NOT know the population parameters. Instead, we have a representative sample. Using the information from the sample, we &ldquo;make a guess&rdquo; at, or estimate, the parameters of the population.
In the simplest case, if we are interested in the population mean $\mu$ or variance $\sigma^2$, we can just provide a number to represent our best &ldquo;guess&rdquo; or estimate of the mean or variance.</description>
</item>
<item>
<title>Lecture 16 Sampling Distributions of The Sample Variance</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-16/</link>
<pubDate>Wed, 27 Mar 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-16/</guid>
<description>The content in this article is a bit dry and difficult, but you only need to pay attention to the logic, not the math details.
Now that we know the distribution of the sample mean $\bar{X}$, the next question comes naturally: what is the sampling distribution of the sample variance $S^2$? We do not really have a theorem to tell us about the distribution of the sample variance, so we have to figure it out by ourselves.</description>
</item>
<item>
<title>Lecture 15 Sampling Distributions &amp; The Central Limit Theorem</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-15/</link>
<pubDate>Wed, 27 Mar 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-15/</guid>
<description>In this lecture, we introduced the two most important concepts in this entire course: the sampling distribution and the central limit theorem. We have seen how they help us make inferences about the population based on just one sample of limited size ($n$). They are the foundation of the hypothesis testing that we are going to introduce in the future.
As mentioned before, in inferential statistics we would like to use the data from the sample to make inferences about certain unknown properties of the population.</description>
</item>
<item>
<title>Lecture 14 Populations And Samples</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-14/</link>
<pubDate>Fri, 22 Mar 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-14/</guid>
<description>Lecture 14 marks the start of the statistics section, or more precisely, the inferential statistics section. In inferential statistics, we would like to use the information from the samples we get to make inferences and draw some generalised conclusions about populations.
First, we need to clarify the concepts of population and sample. We actually already talked about them in previous lectures, even though we have not formally defined them. The concepts of populations and samples are actually difficult to describe.</description>
</item>
<item>
<title>Lecture 13 Normal Distributions</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-13/</link>
<pubDate>Fri, 22 Mar 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-13/</guid>
<description>In Lecture 13, we talked about why the normal distribution was useful and how we used it to calculate probabilities of events of our interest. This was the last lecture in the probability section. We moved on to the statistics section after this lecture.
Normal (Gaussian) Distributions: We have come across some basics about the normal distributions. The PDF is:
$$f_X(x)=\cfrac{1}{\sqrt{2\pi}\sigma}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
Expectation &amp; Variance: As always, whenever we get a PDF or PMF, we need to check whether it describes a valid probabilistic model.</description>
</item>
<item>
<title>Lecture 12 Continuous Random Variables</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-12/</link>
<pubDate>Fri, 15 Mar 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-12/</guid>
<description>Once we have seen some discrete random variables, we can move on to look at continuous random variables. As the name indicates, the values taken by a continuous random variable are continuous. As we discussed before, in this case the probability of taking any specific value is zero. Therefore, it does NOT make sense to use a PMF to describe a continuous random variable. Instead, we use the probability density function (PDF), denoted by $f_X(x)$.</description>
</item>
<item>
<title>Lecture 11 Discrete Random Variables</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-11/</link>
<pubDate>Fri, 15 Mar 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-11/</guid>
<description>Now that we have introduced the concept of random variables, let&rsquo;s look at some common random variables, starting with the easier cases of discrete random variables. They are common because we can easily relate them to real-life events. Once we get familiar with them, we can use them to model and solve problems.
Bernoulli Random Variables: As I said before, when we first come across a new thing, always ALWAYS start with simple examples to get our intuition right.</description>
</item>
<item>
<title>Lecture 10 Random Variables</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-10/</link>
<pubDate>Wed, 13 Mar 2024 00:10:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-10/</guid>
<description>In this lecture, we introduced perhaps the first &ldquo;new&rdquo; concept in this course: random variables. It is very important to understand the concepts of random variables and the expectation and variance of a random variable. When coming across something new, always start with something simple to get a rough idea. Therefore, we used some simple and intuitive examples, such as coin tossing, during the lectures.
Random Variables: A random variable is defined as a real-valued function defined on a sample space $\Omega$.</description>
</item>
<item>
<title>Lecture 9 Counting</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-09/</link>
<pubDate>Wed, 13 Mar 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-09/</guid>
<description>In Lecture 9, we paused a little bit and did a quick recap of the basic counting principles. The content was basically what you learnt during your high school math classes, but in English. You see that the counting principles we are talking about greatly help us calculate the probability of events consisting of outcomes from discrete sample spaces.
You might ask why we bother to do a recap about counting.</description>
</item>
<item>
<title>Lecture 8 Independent Events</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-08/</link>
<pubDate>Fri, 08 Mar 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-08/</guid>
<description>Lecture 8 introduced the concept of independence. It is a relatively straightforward concept to understand. Intuitively, events $A$ and $B$ are independent if:
$$\mathbb{P}(A|B)=\mathbb{P}(A), \mathbb{P}(B) \neq 0$$
In Bayesian terms, we can say that the posterior probability is equal to the prior probability. In other words, event $B$ has nothing to do with $A$; event $B$ does not provide any information about event $A$; given that event $B$ has occurred, we do not change our belief about event $A$ &hellip; Recall that from Lecture 5, we have:</description>
</item>
<item>
<title>Lecture 7 More On The Bayes Theorem</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-07/</link>
<pubDate>Fri, 08 Mar 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-07/</guid>
<description>Lecture 7 introduced more examples of using Bayes&rsquo; theorem to help us make decisions in real life. Sometimes, the conclusion seems counterintuitive at first. However, if you grasp the idea of &ldquo;using the new information to update your prior belief&rdquo;, you will find Bayes&rsquo; rule very natural. We also introduced a new form of Bayes&rsquo; rule without talking about the events $A$, $B$ etc.:
$$\mathbb{P}(H_i|E) = \cfrac{\mathbb{P}(H_i)\mathbb{P}(E|H_i)}{\mathbb{P}(E)} = \cfrac{\mathbb{P}(E|H_i)}{\sum_{j=1}^n \mathbb{P}(H_j)\mathbb{P}(E|H_j)} \cdot \mathbb{P}(H_i)$$</description>
</item>
<item>
<title>Lecture 6 The Bayes' Theorem</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-06/</link>
<pubDate>Fri, 01 Mar 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-06/</guid>
<description>Now we are familiar with conditional probabilities. The basic definition and formula is:
$$\mathbb{P}(A|B)=\cfrac{\mathbb{P}(A \cap B)}{\mathbb{P}(B)} \textmd{, } \mathbb{P}(B) \neq 0$$
In the previous lecture, we used some examples to show how to calculate three probabilities: $\mathbb{P}(A \cap B)$, $\mathbb{P}(B)$ and $\mathbb{P}(A|B)$, which are essentially the three terms in the above formula. In reality, we sometimes need to calculate some very complicated probabilities, but they all boil down to some combination of those three probabilities.</description>
</item>
<item>
<title>Lecture 5 Conditional Probability</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-05/</link>
<pubDate>Fri, 01 Mar 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-05/</guid>
<description>In Lecture 5, we introduced the concept of conditional probability and saw how we could use it to learn from the real world. Whenever we come across a new concept, the first thing is always to build up our intuition about it. Using a few simple examples, we started to get some ideas about conditional probabilities.
In conditional probability, we are interested in calculating the probability of an event when taking into account some knowledge that we have.</description>
</item>
<item>
<title>Lecture 4 Probability Axioms</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-04/</link>
<pubDate>Wed, 28 Feb 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-04/</guid>
<description>Probability Axioms: After we finished the content of descriptive statistics, we moved on to the section on probability, which is really good at dealing with randomness. You have probably already come across some concepts of probability. In Lecture 4, we just made things formal. Probability axioms were introduced in this lecture. Those are the things we need to agree on. They are quite intuitive, and hopefully, you have no problem accepting them as facts:</description>
</item>
<item>
<title>Lecture 3 Numerical Measures</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-03/</link>
<pubDate>Wed, 28 Feb 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-03/</guid>
<description>Numerical Measures: Lecture 3 concludes the section on descriptive statistics. In the previous lectures, we demonstrated that graphs are a nice way of showing the data. One can immediately get a rough idea about what the data look like from a good graph. However, when the data is presented in graphs, the quantitative information in the data is lost. Therefore, it would be good if we could just use a bunch of numbers to summarise the data.</description>
</item>
<item>
<title>Lecture 2 Descriptive Statistics</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-02/</link>
<pubDate>Fri, 23 Feb 2024 00:00:10 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-02/</guid>
<description>Lecture 2 started to introduce some basic techniques in descriptive statistics. You can think of this lecture as a recap of what you have already learnt in the past. Make sure you get familiar with all the graphs introduced in this lecture, because we are going to use them all the time in future lectures. Perhaps the most important graph in this course is the histogram, which is the most common way of showing frequency distributions:</description>
</item>
<item>
<title>Lecture 1 Course Introduction</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-01/</link>
<pubDate>Fri, 23 Feb 2024 00:00:00 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/posts/lecture-01/</guid>
<description>In the first lecture of the entire course, we had a brief overview of the course structure and content. Lecture 1 started with some administrative aspects of the course. Then we provided an introduction to what is and is NOT included in the course, what to expect from the course and the difference between BIO210 and MA212.
Your final grade is a weighted average of the following:
Attendance ($10\%$), Homework ($20\%$), Mid-term ($30\%$), Final exam ($40\%$). Not much stuff in this lecture really, so you could just relax and listen.</description>
</item>
<item>
<title>List of all course content</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/course/</link>
<pubDate>Thu, 22 Feb 2024 16:01:31 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/course/</guid>
<description>All course content
Lesson 1 (10-Sep-2024): Lecture slides: Lecture 1 Introduction To BIO210; Lecture 2 Data Presentation. Homework assignment: None. Extra reading material: None.
Lesson 2 (13-Sep-2024): Lecture slides: Lecture 3 Numerical Measures; Lecture 4 Probability Axioms. Homework assignment: Assignment 1. Extra reading material: None.
Lesson 3 (20-Sep-2024): Lecture slides: Lecture 5 Conditional Probability; Lecture 6 The Bayes Theorem. Homework assignment: None. Extra reading material: The Pedigree Analysis.
Lesson 4 (24-Sep-2024): Lecture slides: Lecture 7 More On The Bayes&rsquo; Theorem; Lecture 8 Independent Events. Homework assignment: Assignment 2. Extra reading material: None.
Lesson 5 (27-Sep-2024): Lecture slides: Lecture 9 Counting; Lecture 10 Random Variable, PMF, Expectation &amp; Variance. Homework assignment: None. Extra reading material: Various Proofs About RV.
Lesson 6 (29-Sep-2024): Lecture slides: Lecture 11 Discrete Probability Distributions; Lecture 12 Continuous Probability Distributions. Homework assignment: None. Extra reading material: Mean &amp; Variance of Binomial &amp; Poisson.
Lesson 7 (08-Oct-2024): Lecture slides: Lecture 13 Normal Distribution; Lecture 14 Population And Sample. Homework assignment: Assignment 3. Extra reading material: Useful Properties of A Normal Random Variable; Calculate Probabilities Using Excel.
Lesson 8 (11-Oct-2024): Lecture slides: Lecture 15 Sampling Distribution And The CLT; Lecture 16 Sampling Distribution of The Sample Variance. Homework assignment: None. Extra reading material: None.
Lesson 9 (18-Oct-2024): Lecture slides: Lecture 17 Maximum Likelihood Estimation; Lecture 18 The Error Curve Derived By MLE. Homework assignment: Assignment 4. Extra reading material: MLE For Variance.
Lesson 10 (22-Oct-2024): Lecture slides: Lecture 19 Confidence Interval For The Mean; Lecture 20 Confidence Interval For The Variance. Homework assignment: None. Extra reading material: The Student&rsquo;s t Distribution; Calculating Probabilities Using R.
Lesson 11 (25-Oct-2024): Lecture slides: Lecture 21 Normal Approximation To Binomial Distribution &amp; Sampling Distribution of The Sample Proportion; Lecture 22 Confidence Interval For The Proportion. Homework assignment: None. Extra reading material: None.
Lesson 12 (01-Nov-2024): Lecture slides: Lecture 23 Introduction To Hypothesis Testing; Lecture 24 Hypothesis Testing Terms. Homework assignment: Assignment 5. Extra reading material: None.
Lesson 13 (05-Nov-2024): Lecture slides: Lecture 25 More On Hypothesis Testing; Lecture 26 Error, Power And Sample Size Estimation. Homework assignment: None. Extra reading material: None.</description>
</item>
<item>
<title>Welcome To BIO210 Biostatistics</title>
<link>https://dbrg77.github.io/SUSTech-BIO210/about/</link>
<pubDate>Fri, 10 Feb 2023 16:01:31 +0800</pubDate>
<guid>https://dbrg77.github.io/SUSTech-BIO210/about/</guid>
<description>This static website holds all course material from the BIO210 Biostatistics course delivered by the School of Life Sciences at SUSTech, Shenzhen. This is an entry-level statistics course for undergraduates who have no prior knowledge about statistics at all. We do assume you are familiar with the math from high school and first-year undergraduate training.
About This Website: You can find all material listed on this Content Index page, typically a few days before or after the actual lesson.</description>
</item>
</channel>
</rss>