
Commit 69a08c1

Merge pull request #316 from agitter/overfitting-figure
[ci skip] This build is based on cc09f2b. This commit was created by the following CI build and job:
https://github.com/Benjamin-Lee/deep-rules/commit/cc09f2b48953fe4b7a52ad5ff955bfb0d25e62bd/checks
https://github.com/Benjamin-Lee/deep-rules/runs/506142207
1 parent f3fca12 commit 69a08c1

File tree

7 files changed (+232 additions, -155 deletions)


citations.tsv

Lines changed: 1 addition & 0 deletions
@@ -78,6 +78,7 @@ doi:10.1109/TBDATA.2016.2573280 doi:10.1109/TBDATA.2016.2573280 doi:10.1109/tbda
 doi:10/bjjdg2 doi:10/bjjdg2 doi:10.1016/s0893-6080(05)80131-5 AE3ehMCc
 tag:srivastava-dropout http://dl.acm.org/citation.cfm?id=2670313 url:http://dl.acm.org/citation.cfm?id=2670313 wgOFUxdw
 tag:ioffe-batchnorm https://dl.acm.org/citation.cfm?id=3045118.3045167 url:https://dl.acm.org/citation.cfm?id=3045118.3045167 4oKcgKmU
+doi:10.1073/pnas.1903070116 doi:10.1073/pnas.1903070116 doi:10.1073/pnas.1903070116 qCKLXDUQ
 arxiv:1811.12808 arxiv:1811.12808 arxiv:1811.12808 1CDx6NYSj
 doi:10.1162/089976698300017197 doi:10.1162/089976698300017197 doi:10.1162/089976698300017197 hJQdIoO3
 url:http://jmlr.csail.mit.edu/papers/v15/srivastava14a.html url:http://jmlr.csail.mit.edu/papers/v15/srivastava14a.html url:http://jmlr.csail.mit.edu/papers/v15/srivastava14a.html R1RpVu06

manuscript.html

Lines changed: 81 additions & 74 deletions
Large diffs are not rendered by default.

manuscript.md

Lines changed: 13 additions & 12 deletions
@@ -22,7 +22,7 @@ author-meta:
 - Juan Jose Carmona
 bibliography:
 - content/manual-references.json
-date-meta: '2021-01-21'
+date-meta: '2021-01-23'
 header-includes: '<!--

   Manubot generated metadata rendered from header-includes-template.html.
@@ -41,9 +41,9 @@ header-includes: '<!--

   <meta property="twitter:title" content="Ten Quick Tips for Deep Learning in Biology" />

-  <meta name="dc.date" content="2021-01-21" />
+  <meta name="dc.date" content="2021-01-23" />

-  <meta name="citation_publication_date" content="2021-01-21" />
+  <meta name="citation_publication_date" content="2021-01-23" />

   <meta name="dc.language" content="en-US" />

@@ -217,19 +217,19 @@ header-includes: '<!--

   <link rel="alternate" type="application/pdf" href="https://Benjamin-Lee.github.io/deep-rules/manuscript.pdf" />

-  <link rel="alternate" type="text/html" href="https://Benjamin-Lee.github.io/deep-rules/v/cdf8ae16f5b10a5ef134a5f56af5fde5c173d11b/" />
+  <link rel="alternate" type="text/html" href="https://Benjamin-Lee.github.io/deep-rules/v/cc09f2b48953fe4b7a52ad5ff955bfb0d25e62bd/" />

-  <meta name="manubot_html_url_versioned" content="https://Benjamin-Lee.github.io/deep-rules/v/cdf8ae16f5b10a5ef134a5f56af5fde5c173d11b/" />
+  <meta name="manubot_html_url_versioned" content="https://Benjamin-Lee.github.io/deep-rules/v/cc09f2b48953fe4b7a52ad5ff955bfb0d25e62bd/" />

-  <meta name="manubot_pdf_url_versioned" content="https://Benjamin-Lee.github.io/deep-rules/v/cdf8ae16f5b10a5ef134a5f56af5fde5c173d11b/manuscript.pdf" />
+  <meta name="manubot_pdf_url_versioned" content="https://Benjamin-Lee.github.io/deep-rules/v/cc09f2b48953fe4b7a52ad5ff955bfb0d25e62bd/manuscript.pdf" />

   <meta property="og:type" content="article" />

   <meta property="twitter:card" content="summary_large_image" />

-  <meta property="og:image" content="https://github.com/Benjamin-Lee/deep-rules/raw/cdf8ae16f5b10a5ef134a5f56af5fde5c173d11b/content/images/thumbnail_tips_overview.png" />
+  <meta property="og:image" content="https://github.com/Benjamin-Lee/deep-rules/raw/cc09f2b48953fe4b7a52ad5ff955bfb0d25e62bd/content/images/thumbnail_tips_overview.png" />

-  <meta property="twitter:image" content="https://github.com/Benjamin-Lee/deep-rules/raw/cdf8ae16f5b10a5ef134a5f56af5fde5c173d11b/content/images/thumbnail_tips_overview.png" />
+  <meta property="twitter:image" content="https://github.com/Benjamin-Lee/deep-rules/raw/cc09f2b48953fe4b7a52ad5ff955bfb0d25e62bd/content/images/thumbnail_tips_overview.png" />

   <link rel="icon" type="image/png" sizes="192x192" href="https://manubot.org/favicon-192x192.png" />

@@ -258,10 +258,10 @@ title: Ten Quick Tips for Deep Learning in Biology

 <small><em>
 This manuscript
-([permalink](https://Benjamin-Lee.github.io/deep-rules/v/cdf8ae16f5b10a5ef134a5f56af5fde5c173d11b/))
+([permalink](https://Benjamin-Lee.github.io/deep-rules/v/cc09f2b48953fe4b7a52ad5ff955bfb0d25e62bd/))
 was automatically generated
-from [Benjamin-Lee/deep-rules@cdf8ae1](https://github.com/Benjamin-Lee/deep-rules/tree/cdf8ae16f5b10a5ef134a5f56af5fde5c173d11b)
-on January 21, 2021.
+from [Benjamin-Lee/deep-rules@cc09f2b](https://github.com/Benjamin-Lee/deep-rules/tree/cc09f2b48953fe4b7a52ad5ff955bfb0d25e62bd)
+on January 23, 2021.
 </em></small>

 ## Authors
@@ -695,8 +695,9 @@ In other words, the model fits patterns that are overly specific to the data it
 This subtle distinction is made clearer by seeing what happens when a model is tested on data to which it was not exposed during training: just as a student who memorizes exam materials struggles to correctly answer questions for which they have not studied, a machine learning model that has overfit to its training data will perform poorly on unseen test data.
 Deep learning models are particularly susceptible to overfitting due to their relatively large number of parameters and associated representational capacity.
 Just as some students may have greater potential for memorization, deep learning models seem more prone to overfitting than machine learning models with fewer parameters.
+However, having a large number of parameters does not always imply that a neural network will overfit [@doi:10.1073/pnas.1903070116].

-![A visual example of overfitting and failure to generalize. While a high-degree polynomial achieves high accuracy on its training data, it performs poorly on data with specificities that have not been seen before. That is, the model has learned the training dataset specifically rather than learning a generalizable pattern that represents data of this type. In contrast, a simple linear regression works well on both datasets. The greater representational capacity of the polynomial is analogous to using a larger or deeper neural network.](images/overfitting.png){#fig:overfitting-fig}
+![A visual example of overfitting and failure to generalize. While a high-degree polynomial achieves high accuracy on its training data, it performs poorly on the test data that have not been seen before. That is, the model has memorized the training dataset specifically rather than learning a generalizable pattern that represents data of this type. In contrast, a simple linear regression works equally well on both datasets.](images/overfitting.png){#fig:overfitting-fig}

 In general, one of the most effective ways to combat overfitting is to detect it in the first place.
 One way to do this is to split the main dataset being worked on into three independent parts: a training set, a tuning set (also commonly called a validation set in the machine learning literature), and a test set.
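The revised caption describes fitting a high-degree polynomial and a straight line to the same data. As a concrete, hypothetical illustration (not part of this commit), a sketch of that experiment in plain numpy; the data generator, noise level, sample sizes, and degree 15 are arbitrary choices:

```python
# Hypothetical illustration of the overfitting figure: a high-degree
# polynomial memorizes its training points while a straight line generalizes.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Noisy samples from an underlying linear relationship."""
    x = np.sort(rng.uniform(-3, 3, n))
    y = 0.5 * x + rng.normal(scale=0.5, size=n)
    return x, y

x_train, y_train = make_data(20)
x_test, y_test = make_data(20)

for degree in (1, 15):  # simple linear fit vs. high-capacity polynomial
    # polyfit may warn that the degree-15 system is poorly conditioned;
    # that instability is part of what the figure illustrates.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The degree-15 fit drives training error toward zero while test error grows, which is the gap the split described below is meant to detect.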

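The closing context lines describe splitting a dataset into training, tuning, and test sets. A minimal sketch of one way to do this, assuming shuffled index-based splitting; the function name `three_way_split` and the 80/10/10 fractions are hypothetical, not from the manuscript:

```python
# Hypothetical sketch of a three-way split into training, tuning (validation),
# and test sets. Fractions and seed are illustrative.
import numpy as np

def three_way_split(n_samples, train_frac=0.8, tune_frac=0.1, seed=42):
    """Return disjoint index arrays for the training, tuning, and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)  # shuffle so the parts are independent
    n_train = int(train_frac * n_samples)
    n_tune = int(tune_frac * n_samples)
    return (idx[:n_train],
            idx[n_train:n_train + n_tune],
            idx[n_train + n_tune:])

train_idx, tune_idx, test_idx = three_way_split(1000)
print(len(train_idx), len(tune_idx), len(test_idx))  # 800 100 100
```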
manuscript.pdf

14.4 KB
Binary file not shown.

references.json

Lines changed: 101 additions & 33 deletions
@@ -2920,6 +2920,49 @@
     "URL": "https://doi.org/ghfwxq",
     "note": "This CSL JSON Item was automatically generated by Manubot v0.4.1 using citation-by-identifier.\nstandard_id: doi:10.1038/s42256-020-0218-x"
   },
+  {
+    "type": "article-journal",
+    "id": "qCKLXDUQ",
+    "author": [
+      {
+        "family": "Belkin",
+        "given": "Mikhail"
+      },
+      {
+        "family": "Hsu",
+        "given": "Daniel"
+      },
+      {
+        "family": "Ma",
+        "given": "Siyuan"
+      },
+      {
+        "family": "Mandal",
+        "given": "Soumik"
+      }
+    ],
+    "issued": {
+      "date-parts": [
+        [
+          2019,
+          8,
+          6
+        ]
+      ]
+    },
+    "abstract": "Breakthroughs in machine learning are rapidly changing science and society, yet our fundamental understanding of this technology has lagged far behind. Indeed, one of the central tenets of the field, the bias–variance trade-off, appears to be at odds with the observed behavior of methods used in modern machine-learning practice. The bias–variance trade-off implies that a model should balance underfitting and overfitting: Rich enough to express underlying structure in data and simple enough to avoid fitting spurious patterns. However, in modern practice, very rich models such as neural networks are trained to exactly fit (i.e., interpolate) the data. Classically, such models would be considered overfitted, and yet they often obtain high accuracy on test data. This apparent contradiction has raised questions about the mathematical foundations of machine learning and their relevance to practitioners. In this paper, we reconcile the classical understanding and the modern practice within a unified performance curve. This “double-descent” curve subsumes the textbook U-shaped bias–variance trade-off curve by showing how increasing model capacity beyond the point of interpolation results in improved performance. We provide evidence for the existence and ubiquity of double descent for a wide spectrum of models and datasets, and we posit a mechanism for its emergence. This connection between the performance and the structure of machine-learning models delineates the limits of classical analyses and has implications for both the theory and the practice of machine learning.",
+    "container-title": "Proceedings of the National Academy of Sciences",
+    "DOI": "10.1073/pnas.1903070116",
+    "volume": "116",
+    "issue": "32",
+    "page": "15849-15854",
+    "publisher": "Proceedings of the National Academy of Sciences",
+    "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off",
+    "URL": "https://doi.org/gf5dmw",
+    "PMCID": "PMC6689936",
+    "PMID": "31341078",
+    "note": "This CSL JSON Item was automatically generated by Manubot v0.4.1 using citation-by-identifier.\nstandard_id: doi:10.1073/pnas.1903070116"
+  },
   {
     "type": "article-journal",
     "id": "1AyQuG5x7",
@@ -2962,8 +3005,20 @@
     "note": "This CSL JSON Item was automatically generated by Manubot v0.4.1 using citation-by-identifier.\nstandard_id: doi:10.1089/omi.2018.0097"
   },
   {
-    "type": "article-journal",
     "id": "QobI7Hyv",
+    "type": "article-journal",
+    "title": "Correct machine learning on protein sequences: a peer-reviewing perspective",
+    "container-title": "Briefings in Bioinformatics",
+    "page": "831-840",
+    "volume": "17",
+    "issue": "5",
+    "source": "DOI.org (Crossref)",
+    "URL": "https://doi.org/f89ms7",
+    "DOI": "10.1093/bib/bbv082",
+    "ISSN": "1467-5463, 1477-4054",
+    "shortTitle": "Correct machine learning on protein sequences",
+    "journalAbbreviation": "Brief Bioinform",
+    "language": "en",
     "author": [
       {
         "family": "Walsh",
@@ -2981,25 +3036,37 @@
     "issued": {
       "date-parts": [
         [
-          2016,
+          "2016",
           9
         ]
       ]
     },
-    "container-title": "Briefings in Bioinformatics",
-    "DOI": "10.1093/bib/bbv082",
-    "volume": "17",
-    "issue": "5",
-    "page": "831-840",
-    "publisher": "Oxford University Press (OUP)",
-    "title": "Correct machine learning on protein sequences: a peer-reviewing perspective",
-    "URL": "https://doi.org/f89ms7",
+    "accessed": {
+      "date-parts": [
+        [
+          "2021",
+          1,
+          23
+        ]
+      ]
+    },
     "PMID": "26411473",
     "note": "This CSL JSON Item was automatically generated by Manubot v0.4.1 using citation-by-identifier.\nstandard_id: doi:10.1093/bib/bbv082"
   },
   {
-    "type": "article-journal",
     "id": "1GGrbeMvT",
+    "type": "article-journal",
+    "title": "Correcting for experiment-specific variability in expression compendia can remove underlying signals",
+    "container-title": "GigaScience",
+    "page": "giaa117",
+    "volume": "9",
+    "issue": "11",
+    "source": "DOI.org (Crossref)",
+    "abstract": "Abstract\r\n \r\n Motivation\r\n In the past two decades, scientists in different laboratories have assayed gene expression from millions of samples. These experiments can be combined into compendia and analyzed collectively to extract novel biological patterns. Technical variability, or \"batch effects,\" may result from combining samples collected and processed at different times and in different settings. Such variability may distort our ability to extract true underlying biological patterns. As more integrative analysis methods arise and data collections get bigger, we must determine how technical variability affects our ability to detect desired patterns when many experiments are combined.\r\n \r\n \r\n Objective\r\n We sought to determine the extent to which an underlying signal was masked by technical variability by simulating compendia comprising data aggregated across multiple experiments.\r\n \r\n \r\n Method\r\n We developed a generative multi-layer neural network to simulate compendia of gene expression experiments from large-scale microbial and human datasets. We compared simulated compendia before and after introducing varying numbers of sources of undesired variability.\r\n \r\n \r\n Results\r\n The signal from a baseline compendium was obscured when the number of added sources of variability was small. Applying statistical correction methods rescued the underlying signal in these cases. However, as the number of sources of variability increased, it became easier to detect the original signal even without correction. In fact, statistical correction reduced our power to detect the underlying signal.\r\n \r\n \r\n Conclusion\r\n When combining a modest number of experiments, it is best to correct for experiment-specific noise. However, when many experiments are combined, statistical correction reduces our ability to extract underlying patterns.",
+    "URL": "https://doi.org/ghhtpf",
+    "DOI": "10.1093/gigascience/giaa117",
+    "ISSN": "2047-217X",
+    "language": "en",
     "author": [
       {
         "family": "Lee",
@@ -3025,20 +3092,21 @@
     "issued": {
       "date-parts": [
         [
-          2020,
+          "2020",
           11,
           3
         ]
       ]
     },
-    "container-title": "GigaScience",
-    "DOI": "10.1093/gigascience/giaa117",
-    "volume": "9",
-    "issue": "11",
-    "page": "giaa117",
-    "publisher": "Oxford University Press (OUP)",
-    "title": "Correcting for experiment-specific variability in expression compendia can remove underlying signals",
-    "URL": "https://doi.org/ghhtpf",
+    "accessed": {
+      "date-parts": [
+        [
+          "2021",
+          1,
+          23
+        ]
+      ]
+    },
     "PMCID": "PMC7607552",
     "PMID": "33140829",
     "note": "This CSL JSON Item was automatically generated by Manubot v0.4.1 using citation-by-identifier.\nstandard_id: doi:10.1093/gigascience/giaa117"
@@ -5066,7 +5134,7 @@
         [
           "2021",
           1,
-          19
+          22
         ]
       ]
     },
@@ -5119,7 +5187,7 @@
         [
           "2021",
           1,
-          20
+          23
         ]
       ]
     },
@@ -5136,7 +5204,7 @@
         [
           "2021",
           1,
-          20
+          23
         ]
       ]
     },
@@ -5160,7 +5228,7 @@
         [
           "2021",
           1,
-          20
+          23
         ]
       ]
     },
@@ -5203,7 +5271,7 @@
         [
           "2021",
           1,
-          19
+          22
         ]
      ]
     },
@@ -5254,7 +5322,7 @@
         [
           "2021",
           1,
-          19
+          22
         ]
       ]
     },
@@ -5318,7 +5386,7 @@
         [
           "2021",
           1,
-          20
+          23
         ]
       ]
     },
@@ -5327,7 +5395,7 @@
         [
           "2021",
           1,
-          20
+          23
         ]
       ]
     }
@@ -5343,7 +5411,7 @@
         [
           "2021",
           1,
-          20
+          23
         ]
       ]
     },
@@ -5403,7 +5471,7 @@
         [
           "2021",
           1,
-          20
+          23
         ]
       ]
     },
@@ -5428,7 +5496,7 @@
         [
           "2021",
           1,
-          20
+          23
         ]
       ]
     },
@@ -5445,7 +5513,7 @@
         [
           "2021",
           1,
-          20
+          23
         ]
       ]
     },
@@ -5541,7 +5609,7 @@
         [
           "2021",
           1,
-          20
+          23
         ]
       ]
     },

0 commit comments
