We first conduct an empirical study to comprehensively explore the intended usage context of pre-trained models in model repositories. Specifically, inspired by code-reuse practices in software engineering, we examine the usage information described in model cards, FactSheets, and model repositories to derive our pre/post-conditions, and conduct an exploratory study of 1,908 pre-trained models across six model repositories (i.e., TensorFlow Hub, PyTorch Hub, Model Zoo, the Wolfram Neural Net Repository, NVIDIA, and Hugging Face) to investigate the gap between documentation guidance and actual specification.
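To make the notion of pre/post-conditions concrete, the sketch below shows one way that usage-context information extracted from a model card could be turned into executable checks around a reused image classifier. This is a minimal illustration under our own assumptions: the `UsageContract` class, its fields, and the tolerance values are hypothetical and do not describe the study's actual tooling.

```python
# Illustrative sketch only: encoding a model card's documented usage
# context as executable pre/post-conditions. The class name, fields,
# and tolerances are assumptions for illustration, not the study's tooling.
from dataclasses import dataclass

import numpy as np


@dataclass
class UsageContract:
    """Pre/post-conditions for reusing a pre-trained image classifier."""
    input_shape: tuple                  # precondition: per-sample input shape
    pixel_range: tuple = (0.0, 1.0)     # precondition: documented input value range
    num_classes: int = 1000             # postcondition: size of the output distribution

    def check_pre(self, batch: np.ndarray) -> None:
        """Validate a batch against the documented input conditions."""
        assert tuple(batch.shape[1:]) == self.input_shape, "unexpected input shape"
        lo, hi = self.pixel_range
        assert batch.min() >= lo and batch.max() <= hi, "inputs outside documented range"

    def check_post(self, probs: np.ndarray) -> None:
        """Validate model outputs against the documented output conditions."""
        assert probs.shape[-1] == self.num_classes, "unexpected output dimensionality"
        assert np.allclose(probs.sum(axis=-1), 1.0, atol=1e-3), "outputs are not a distribution"


# Usage: check a synthetic batch and a simulated softmax output.
contract = UsageContract(input_shape=(224, 224, 3))
batch = np.random.rand(8, 224, 224, 3).astype("float32")
contract.check_pre(batch)

probs = np.full((8, 1000), 1.0 / 1000, dtype="float32")  # stand-in for model(batch)
contract.check_post(probs)
```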
The collected pre-trained models are published in the OpenSELab/modelreuse GitHub repository.