I can map most of the model names from DeepLCModels to Supplementary Table 2 of the 2021 publication. A couple of gaps are filled by issue #77 (thanks!).
I cannot find any information about these models, which were added after the publication:
As for the other models: these were mostly trained on internal data. I did make the models themselves public, as they could be especially useful for TMT (full_hc_tmt_data_consensus_ticnum_filtered), phosphopeptides (full_hc_phospho_kai_li), or modifications in general (full_hc_mod_deeplc_train_filtered). Unfortunately, I cannot give you a timeline for when the underlying data will be publicly available.
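In case it helps, here is a minimal sketch of how one of these models could be used for prediction, assuming DeepLC's documented Python API (a `DeepLC` class with a `path_model` argument and a `make_preds` method); the model file path and peptide sequences are illustrative placeholders.

```python
import pandas as pd
from deeplc import DeepLC

# Peptides to predict; column names follow the DeepLC input convention
# ("seq" and "modifications"). The sequences here are arbitrary examples.
pep_df = pd.DataFrame({
    "seq": ["AAGPSLSHTSGGTQSK", "VEYLDDRNTFR"],
    "modifications": ["", ""],
})

# Point path_model at one of the published model files from DeepLCModels,
# e.g. the phospho model named above (the local filename is a placeholder).
dlc = DeepLC(path_model="full_hc_phospho_kai_li.hdf5")

# Predict retention times without calibration; for calibrated predictions,
# call dlc.calibrate_preds(seq_df=...) on peptides with known "tr" first.
preds = dlc.make_preds(seq_df=pep_df, calibrate=False)
print(preds)
```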
Regarding multreta: that was an experimental run in which the model was iteratively trained on a large number of datasets. Each dataset was treated as a separate entity and trained on for only a couple of epochs before switching to the next, as sketched below. Although I cannot give any guarantees, this model seems to perform very well across a large number of datasets.
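For clarity, here is a minimal sketch of that kind of iterative scheme, not the actual multreta training code: the model, the synthetic stand-in datasets, and the hyperparameters are all placeholders chosen for illustration.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Stand-ins for a collection of heterogeneous retention-time datasets
# (features -> RT); real training would use per-dataset peptide encodings.
datasets = [
    TensorDataset(torch.randn(256, 20), torch.randn(256, 1))
    for _ in range(5)
]

# One shared model is fine-tuned across all datasets.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # MAE is a common choice for RT prediction

EPOCHS_PER_DATASET = 2  # "a couple of epochs" before switching
CYCLES = 3              # passes over the whole dataset collection

for cycle in range(CYCLES):
    for i, ds in enumerate(datasets):
        loader = DataLoader(ds, batch_size=32, shuffle=True)
        # Train briefly on this dataset, then move on to the next one.
        for _ in range(EPOCHS_PER_DATASET):
            for x, y in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                optimizer.step()
        print(f"cycle {cycle}, dataset {i}: last batch loss {loss.item():.3f}")
```

The idea is that frequent switching keeps the shared model from overfitting any single dataset while still exposing it to a wide range of chromatographic conditions.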
For the models added after the publication, the PRIDE project gives some clues, of course, but was the dataset on MassIVE ever published?