Commit 92b1eef ("init"), 1 parent: db7a5e8

189 files changed: +9980, -2 lines


.gitignore (+132 lines)

```
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
apex/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

.vscode
.idea

# custom
*.pkl
*.pkl.json
*.log.json
*.jpg
bash
data
dataset
data_set
output/
work_dirs/
workspace/
tools/exp_bash/
pretrains

# Pytorch
*.pth

*.swp
source.sh
tensorboard.sh
.DS_Store
*.json
```

INSTALL.md (+68 lines)

# Installation

We provide installation instructions for the pre-training and fine-tuning experiments here.

## Dependency Setup

Install OpenMixup>=0.2.7 for A2MIM experiments. The following steps create a new conda virtual environment; you can modify the PyTorch version according to your own environment.
```shell
conda create -n a2mim python=3.8 -y
conda activate a2mim
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install openmim
mim install mmcv-full
git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
python setup.py install
cd ..
rm -r openmixup  # you can keep the source code to view implementation details
```

Then, you can set up [MMDetection](https://github.com/open-mmlab/mmdetection/) and [MMSegmentation](https://github.com/open-mmlab/mmsegmentation/) for the downstream tasks.
```shell
pip install mmdet
pip install mmsegmentation  # MMSegmentation's PyPI package; it imports as `mmseg`
```

## Dataset Preparation

It is recommended to symlink your dataset root (assuming `$DATA_ROOT`) to `$A2MIM/data` with `ln -s $DATA_ROOT ./data`. If your folder structure is different, you may need to change the corresponding paths in the config files.
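The symlink step can be sketched as follows; `/path/to/datasets` is a hypothetical location, so substitute your actual `$DATA_ROOT`:

```shell
# Hypothetical dataset root; replace with your actual $DATA_ROOT.
DATA_ROOT=/path/to/datasets
# -sfn: create (or replace) a symbolic link named ./data pointing at $DATA_ROOT
ln -sfn "$DATA_ROOT" ./data
ls -ld ./data
```

Note that `ln -s` succeeds even if the target does not exist yet, so check that the listing shows the intended path (`./data -> /path/to/datasets`).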

### ImageNet

Prepare the meta files of ImageNet from [OpenMixup](https://github.com/Westlake-AI/openmixup) with the following scripts:
```shell
mkdir data/meta
cd data/meta
wget https://github.com/Westlake-AI/openmixup/releases/download/dataset/meta.zip
unzip meta.zip
rm meta.zip
```

Download the [ImageNet-1K](http://image-net.org/) classification dataset ([train](https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_train.tar) and [val](https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar)) and structure the data as shown in the folder layout below.
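The two archives can be unpacked into the expected `train`/`val` layout roughly as below. This is a sketch, assuming both tar files were downloaded into `data/ImageNet/` beforehand; the train archive contains one nested tar per class:

```shell
# Sketch: unpack the ImageNet archives into train/ and val/.
# Assumes both tars were downloaded into data/ImageNet/ beforehand.
ROOT=data/ImageNet
mkdir -p "$ROOT/train" "$ROOT/val"
if [ -f "$ROOT/ILSVRC2012_img_train.tar" ]; then
  tar -xf "$ROOT/ILSVRC2012_img_train.tar" -C "$ROOT/train"
  # the train archive holds one tar per class (n01440764.tar, ...);
  # unpack each into its own class folder, then remove the inner tar
  for f in "$ROOT"/train/*.tar; do
    d="${f%.tar}"
    mkdir -p "$d" && tar -xf "$f" -C "$d" && rm "$f"
  done
fi
if [ -f "$ROOT/ILSVRC2012_img_val.tar" ]; then
  tar -xf "$ROOT/ILSVRC2012_img_val.tar" -C "$ROOT/val"
fi
```

The val archive extracts to a flat folder of JPEGs, which matches the layout below (the `meta` ImageList files carry the val labels).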

### Downstream Task Datasets

Download [COCO2017](https://cocodataset.org/#download) and prepare the COCO experiments according to the guidelines in [MMDetection](https://github.com/open-mmlab/mmdetection/).

Prepare [ADE20K](https://arxiv.org/abs/1608.05442) according to the [guidelines](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/dataset_prepare.md#prepare-datasets) in MMSegmentation. Please use the 2016 version of the ADE20K dataset, which can be downloaded from [ADEChallengeData2016](https://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip) or [**Baidu Cloud**](https://pan.baidu.com/s/1EIrXVTOxX-cdhYVfqd9Vng?pwd=7ycz) (extraction code: 7ycz).

Finally, the folder structure looks like this:

```
root
├── configs
├── data
│   ├── ade
│   ├── coco
│   ├── meta [used for 'ImageList' dataset]
│   ├── ImageNet
│   │   ├── train
│   │   │   ├── n01440764
│   │   │   ├── n01443537
│   │   │   ...
│   │   │   ├── n15075141
│   │   ├── val
│   │   │   ├── ILSVRC2012_val_00000001.JPEG
│   │   │   ...
```
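A quick way to sanity-check the resulting layout (a sketch; paths assume the default `./data` symlink, and the expected counts refer to standard ImageNet-1K):

```shell
# Count train class folders and val images; for standard ImageNet-1K,
# expect 1000 train classes and 50000 val images.
echo "train classes: $(ls data/ImageNet/train 2>/dev/null | wc -l)"
echo "val images:    $(ls data/ImageNet/val 2>/dev/null | wc -l)"
```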
