diff --git a/models/intel/yolo-v2-ava-0001/description/yolo-v2-ava-0001.md b/models/intel/yolo-v2-ava-0001/description/yolo-v2-ava-0001.md
index 2aeca62531f..e99e3121e75 100644
--- a/models/intel/yolo-v2-ava-0001/description/yolo-v2-ava-0001.md
+++ b/models/intel/yolo-v2-ava-0001/description/yolo-v2-ava-0001.md
@@ -4,8 +4,6 @@
 This is a reimplemented and retrained version of the [YOLO v2](https://arxiv.org/abs/1612.08242) object detection network trained with the VOC2012 training dataset.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/intel/yolo-v2-ava-sparse-35-0001/description/yolo-v2-ava-sparse-35-0001.md b/models/intel/yolo-v2-ava-sparse-35-0001/description/yolo-v2-ava-sparse-35-0001.md
index bfcab2e4ad8..18d2594a9bd 100644
--- a/models/intel/yolo-v2-ava-sparse-35-0001/description/yolo-v2-ava-sparse-35-0001.md
+++ b/models/intel/yolo-v2-ava-sparse-35-0001/description/yolo-v2-ava-sparse-35-0001.md
@@ -5,8 +5,6 @@
 This is a reimplemented and retrained version of the [YOLO v2](https://arxiv.org/abs/1612.08242) object detection network trained with the VOC2012 training dataset.
 [Network weight pruning](https://arxiv.org/abs/1710.01878) is applied to sparsify convolution layers (35% of network parameters are set to zeros).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/intel/yolo-v2-ava-sparse-70-0001/description/yolo-v2-ava-sparse-70-0001.md b/models/intel/yolo-v2-ava-sparse-70-0001/description/yolo-v2-ava-sparse-70-0001.md
index 706e22c5c4b..bfd42796d2d 100644
--- a/models/intel/yolo-v2-ava-sparse-70-0001/description/yolo-v2-ava-sparse-70-0001.md
+++ b/models/intel/yolo-v2-ava-sparse-70-0001/description/yolo-v2-ava-sparse-70-0001.md
@@ -5,8 +5,6 @@
 This is a reimplemented and retrained version of the [YOLO v2](https://arxiv.org/abs/1612.08242) object detection network trained with the VOC2012 training dataset.
 [Network weight pruning](https://arxiv.org/abs/1710.01878) is applied to sparsify convolution layers (70% of network parameters are set to zeros).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/intel/yolo-v2-tiny-ava-0001/description/yolo-v2-tiny-ava-0001.md b/models/intel/yolo-v2-tiny-ava-0001/description/yolo-v2-tiny-ava-0001.md
index 2c1366fc5d9..54ff07acf9d 100644
--- a/models/intel/yolo-v2-tiny-ava-0001/description/yolo-v2-tiny-ava-0001.md
+++ b/models/intel/yolo-v2-tiny-ava-0001/description/yolo-v2-tiny-ava-0001.md
@@ -4,8 +4,6 @@
 This is a reimplemented and retrained version of the [tiny YOLO v2](https://arxiv.org/abs/1612.08242) object detection network trained with the VOC2012 training dataset.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/intel/yolo-v2-tiny-ava-sparse-30-0001/description/yolo-v2-tiny-ava-sparse-30-0001.md b/models/intel/yolo-v2-tiny-ava-sparse-30-0001/description/yolo-v2-tiny-ava-sparse-30-0001.md
index 7c02d254b3e..ed20eaf2435 100644
--- a/models/intel/yolo-v2-tiny-ava-sparse-30-0001/description/yolo-v2-tiny-ava-sparse-30-0001.md
+++ b/models/intel/yolo-v2-tiny-ava-sparse-30-0001/description/yolo-v2-tiny-ava-sparse-30-0001.md
@@ -5,8 +5,6 @@
 This is a reimplemented and retrained version of the [tiny YOLO v2](https://arxiv.org/abs/1612.08242) object detection network trained with the VOC2012 training dataset.
 [Network weight pruning](https://arxiv.org/abs/1710.01878) is applied to sparsify convolution layers (30% of network parameters are set to zeros).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/intel/yolo-v2-tiny-ava-sparse-60-0001/description/yolo-v2-tiny-ava-sparse-60-0001.md b/models/intel/yolo-v2-tiny-ava-sparse-60-0001/description/yolo-v2-tiny-ava-sparse-60-0001.md
index 0ae2e2d3a40..1f5cc32d7f1 100644
--- a/models/intel/yolo-v2-tiny-ava-sparse-60-0001/description/yolo-v2-tiny-ava-sparse-60-0001.md
+++ b/models/intel/yolo-v2-tiny-ava-sparse-60-0001/description/yolo-v2-tiny-ava-sparse-60-0001.md
@@ -5,8 +5,6 @@
 This is a reimplemented and retrained version of the [tiny YOLO v2](https://arxiv.org/abs/1612.08242) object detection network trained with the VOC2012 training dataset.
 [Network weight pruning](https://arxiv.org/abs/1710.01878) is applied to sparsify convolution layers (60% of network parameters are set to zeros).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/Sphereface/Sphereface.md b/models/public/Sphereface/Sphereface.md
index 2d2b02bfcfa..5ac0665be71 100644
--- a/models/public/Sphereface/Sphereface.md
+++ b/models/public/Sphereface/Sphereface.md
@@ -4,8 +4,6 @@
 [Deep face recognition under open-set protocol](https://arxiv.org/abs/1704.08063)
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/aclnet/aclnet.md b/models/public/aclnet/aclnet.md
index c45cf71c650..259eb9e2c44 100644
--- a/models/public/aclnet/aclnet.md
+++ b/models/public/aclnet/aclnet.md
@@ -10,8 +10,6 @@ The model input is a segment of PCM audio samples in [N, C, 1, L] format.
 
 The model output for `AclNet` is the sound classifier output for the 53 different environmental sound classes from the internal sound database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/alexnet/alexnet.md b/models/public/alexnet/alexnet.md
index 4c45d9b92d6..acbc9870140 100644
--- a/models/public/alexnet/alexnet.md
+++ b/models/public/alexnet/alexnet.md
@@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of 1x3x227x227 in BGR
 
 The model output for `alexnet` is the usual object classifier output for the 1000 different classifications matching those in the ImageNet database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/brain-tumor-segmentation-0001/brain-tumor-segmentation-0001.md b/models/public/brain-tumor-segmentation-0001/brain-tumor-segmentation-0001.md
index 49b65ecb0db..2d741a6e991 100644
--- a/models/public/brain-tumor-segmentation-0001/brain-tumor-segmentation-0001.md
+++ b/models/public/brain-tumor-segmentation-0001/brain-tumor-segmentation-0001.md
@@ -6,8 +6,6 @@ This model was created for participation in the [Brain Tumor Segmentation Challe
 
 The model is based on [the corresponding paper](https://arxiv.org/abs/1810.04008), where authors present deep cascaded approach for automatic brain tumor segmentation. The paper describes modifications to 3D UNet architecture and specific augmentation strategy to efficiently handle multimodal MRI input. Besides this, the approach to enhance segmentation quality with context obtained from models of the same topology operating on downscaled data is introduced. Each input modality has its own encoder which are later fused together to produce single output segmentation.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/brain-tumor-segmentation-0002/brain-tumor-segmentation-0002.md b/models/public/brain-tumor-segmentation-0002/brain-tumor-segmentation-0002.md
index 3c5d0fc6e6c..93a24dc1243 100644
--- a/models/public/brain-tumor-segmentation-0002/brain-tumor-segmentation-0002.md
+++ b/models/public/brain-tumor-segmentation-0002/brain-tumor-segmentation-0002.md
@@ -4,8 +4,6 @@
 This model was created for participation in the [Brain Tumor Segmentation Challenge](https://www.med.upenn.edu/cbica/brats2019/registration.html) (BraTS) 2019. It has the UNet architecture trained with residual blocks.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/caffenet/caffenet.md b/models/public/caffenet/caffenet.md
index 99e592427f4..d75d6726a5c 100644
--- a/models/public/caffenet/caffenet.md
+++ b/models/public/caffenet/caffenet.md
@@ -4,8 +4,6 @@
 CaffeNet\* model is used for classification. For details see [paper](https://arxiv.org/abs/1408.5093).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/colorization-siggraph/colorization-siggraph.md b/models/public/colorization-siggraph/colorization-siggraph.md
index b0047d8f721..481bb857d11 100644
--- a/models/public/colorization-siggraph/colorization-siggraph.md
+++ b/models/public/colorization-siggraph/colorization-siggraph.md
@@ -9,8 +9,6 @@ For details about this family of models, check out the [repository](https://gith
 
 Model consumes as input L-channel of LAB-image (also user points and binary mask as optional inputs).
 Model give as output predict A- and B-channels of LAB-image.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/colorization-v2/colorization-v2.md b/models/public/colorization-v2/colorization-v2.md
index 1c4a63b44d4..6adc03a0007 100644
--- a/models/public/colorization-v2/colorization-v2.md
+++ b/models/public/colorization-v2/colorization-v2.md
@@ -9,8 +9,6 @@ For details about this family of models, check out the [repository](https://gith
 
 Model consumes as input L-channel of LAB-image.
 Model give as output predict A- and B-channels of LAB-image.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/ctdet_coco_dlav0_384/ctdet_coco_dlav0_384.md b/models/public/ctdet_coco_dlav0_384/ctdet_coco_dlav0_384.md
index 18a0205b0a7..ad4e9f5bcdd 100644
--- a/models/public/ctdet_coco_dlav0_384/ctdet_coco_dlav0_384.md
+++ b/models/public/ctdet_coco_dlav0_384/ctdet_coco_dlav0_384.md
@@ -30,8 +30,6 @@ git apply /path/to/pytorch-onnx.patch
 python convert.py ctdet --load_model /path/to/downloaded/weights.pth --exp_id coco_dlav0_384 --arch dlav0_34 --input_res 384 --gpus -1
 ```
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/ctdet_coco_dlav0_512/ctdet_coco_dlav0_512.md b/models/public/ctdet_coco_dlav0_512/ctdet_coco_dlav0_512.md
index f46c637febf..f16f78398ee 100644
--- a/models/public/ctdet_coco_dlav0_512/ctdet_coco_dlav0_512.md
+++ b/models/public/ctdet_coco_dlav0_512/ctdet_coco_dlav0_512.md
@@ -30,8 +30,6 @@ git apply /path/to/pytorch-onnx.patch
 python convert.py ctdet --load_model /path/to/downloaded/weights.pth --exp_id coco_dlav0_512 --arch dlav0_34 --input_res 512 --gpus -1
 ```
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/ctpn/ctpn.md b/models/public/ctpn/ctpn.md
index 831f47fe349..7114f21ea9f 100644
--- a/models/public/ctpn/ctpn.md
+++ b/models/public/ctpn/ctpn.md
@@ -4,8 +4,6 @@
 Detecting Text in Natural Image with Connectionist Text Proposal Network. For details see [paper](https://arxiv.org/abs/1609.03605).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/deeplabv3/deeplabv3.md b/models/public/deeplabv3/deeplabv3.md
index e9cb418ab3a..ef5b9cd4a2d 100644
--- a/models/public/deeplabv3/deeplabv3.md
+++ b/models/public/deeplabv3/deeplabv3.md
@@ -4,8 +4,6 @@
 DeepLab is a state-of-art deep learning model for semantic image segmentation. For details see [paper](https://arxiv.org/abs/1706.05587).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/densenet-121-caffe2/densenet-121-caffe2.md b/models/public/densenet-121-caffe2/densenet-121-caffe2.md
index 77fc3e2642e..a5a8c5a900c 100644
--- a/models/public/densenet-121-caffe2/densenet-121-caffe2.md
+++ b/models/public/densenet-121-caffe2/densenet-121-caffe2.md
@@ -8,8 +8,6 @@
 was converted from Caffe\* to Caffe2\* format. For details see repository , paper .
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/densenet-121-tf/densenet-121-tf.md b/models/public/densenet-121-tf/densenet-121-tf.md
index 9ae4d634adf..0afd8c45963 100644
--- a/models/public/densenet-121-tf/densenet-121-tf.md
+++ b/models/public/densenet-121-tf/densenet-121-tf.md
@@ -5,8 +5,6 @@
 This is a TensorFlow\* version of `densenet-121` model, one of the DenseNet\* group of models designed to perform image classification. The weights were converted from DenseNet-Keras Models. For details, see [repository](https://github.com/pudae/tensorflow-densenet/) and [paper](https://arxiv.org/abs/1608.06993).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/densenet-121/densenet-121.md b/models/public/densenet-121/densenet-121.md
index 29330a1dcac..08733d28bc9 100644
--- a/models/public/densenet-121/densenet-121.md
+++ b/models/public/densenet-121/densenet-121.md
@@ -9,8 +9,6 @@ been pretrained on the ImageNet image database. For details about this family of
 models, check out the [repository](https://github.com/shicai/DenseNet-Caffe).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/densenet-161-tf/densenet-161-tf.md b/models/public/densenet-161-tf/densenet-161-tf.md
index f6f4431adc4..6a0e1c2a4c1 100644
--- a/models/public/densenet-161-tf/densenet-161-tf.md
+++ b/models/public/densenet-161-tf/densenet-161-tf.md
@@ -5,8 +5,6 @@
 This is a TensorFlow\* version of `densenet-161` model, one of the DenseNet group of models designed to perform image classification. The weights were converted from DenseNet-Keras Models. For details see [repository](https://github.com/pudae/tensorflow-densenet/), [paper](https://arxiv.org/abs/1608.06993).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/densenet-161/densenet-161.md b/models/public/densenet-161/densenet-161.md
index a75e1ff8c81..551a9ea835b 100644
--- a/models/public/densenet-161/densenet-161.md
+++ b/models/public/densenet-161/densenet-161.md
@@ -18,8 +18,6 @@ by 0.017.
 
 The model output for `densenet-161` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/densenet-169-tf/densenet-169-tf.md b/models/public/densenet-169-tf/densenet-169-tf.md
index 6e1b0cbcee6..dfff9aef43d 100644
--- a/models/public/densenet-169-tf/densenet-169-tf.md
+++ b/models/public/densenet-169-tf/densenet-169-tf.md
@@ -5,8 +5,6 @@
 This is a TensorFlow\* version of `densenet-169` model, one of the DenseNet group of models designed to perform image classification. The weights were converted from DenseNet-Keras Models. For details, see [repository](https://github.com/pudae/tensorflow-densenet/) and [paper](https://arxiv.org/abs/1608.06993).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/densenet-169/densenet-169.md b/models/public/densenet-169/densenet-169.md
index ef85889475e..5b40b9d497e 100644
--- a/models/public/densenet-169/densenet-169.md
+++ b/models/public/densenet-169/densenet-169.md
@@ -18,8 +18,6 @@ by 0.017.
 
 The model output for `densenet-169` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/densenet-201/densenet-201.md b/models/public/densenet-201/densenet-201.md
index 54399d02807..f766d74e9fd 100644
--- a/models/public/densenet-201/densenet-201.md
+++ b/models/public/densenet-201/densenet-201.md
@@ -18,8 +18,6 @@ by 0.017.
 
 The model output for `densenet-201` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/efficientnet-b0-pytorch/efficientnet-b0-pytorch.md b/models/public/efficientnet-b0-pytorch/efficientnet-b0-pytorch.md
index 673211a6cd7..2f4a63fddb2 100644
--- a/models/public/efficientnet-b0-pytorch/efficientnet-b0-pytorch.md
+++ b/models/public/efficientnet-b0-pytorch/efficientnet-b0-pytorch.md
@@ -12,8 +12,6 @@ order. Before passing the image blob to the network, do the following:
 
 The model output for `efficientnet-b0-pytorch` is the typical object classifier output for 1000 different classifications matching those in the ImageNet database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/efficientnet-b0/efficientnet-b0.md b/models/public/efficientnet-b0/efficientnet-b0.md
index 70f9c7adc45..0ad661c6bcc 100644
--- a/models/public/efficientnet-b0/efficientnet-b0.md
+++ b/models/public/efficientnet-b0/efficientnet-b0.md
@@ -8,8 +8,6 @@ This model was pretrained in TensorFlow\*.
 
 All the EfficientNet models have been pretrained on the ImageNet\* image database. For details about this family of models, check out the [TensorFlow Cloud TPU repository](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md b/models/public/efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md
index 400b4da0d08..7d4af31660c 100644
--- a/models/public/efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md
+++ b/models/public/efficientnet-b0_auto_aug/efficientnet-b0_auto_aug.md
@@ -9,8 +9,6 @@ This model was pretrained in TensorFlow\*.
 
 All the EfficientNet models have been pretrained on the ImageNet\* image database. For details about this family of models, check out the [TensorFlow Cloud TPU repository](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/efficientnet-b5-pytorch/efficientnet-b5-pytorch.md b/models/public/efficientnet-b5-pytorch/efficientnet-b5-pytorch.md
index dd3e046cfdb..46bd6fb4fb9 100644
--- a/models/public/efficientnet-b5-pytorch/efficientnet-b5-pytorch.md
+++ b/models/public/efficientnet-b5-pytorch/efficientnet-b5-pytorch.md
@@ -14,8 +14,6 @@ order. Before passing the image blob to the network, do the following:
 
 The model output for `efficientnet-b5-pytorch` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/efficientnet-b5/efficientnet-b5.md b/models/public/efficientnet-b5/efficientnet-b5.md
index 05dc569dbd2..fe03daf3e88 100644
--- a/models/public/efficientnet-b5/efficientnet-b5.md
+++ b/models/public/efficientnet-b5/efficientnet-b5.md
@@ -8,8 +8,6 @@ This model was pretrained in TensorFlow\*.
 
 All the EfficientNet models have been pretrained on the ImageNet\* image database. For details about this family of models, check out the [TensorFlow Cloud TPU repository](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/efficientnet-b7-pytorch/efficientnet-b7-pytorch.md b/models/public/efficientnet-b7-pytorch/efficientnet-b7-pytorch.md
index 567b1d4e7e8..cd35c2eae31 100644
--- a/models/public/efficientnet-b7-pytorch/efficientnet-b7-pytorch.md
+++ b/models/public/efficientnet-b7-pytorch/efficientnet-b7-pytorch.md
@@ -13,8 +13,6 @@ order. Before passing the image blob to the network, do the following:
 
 The model output for `efficientnet-b7-pytorch` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md b/models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md
index c0039977204..578793710e2 100644
--- a/models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md
+++ b/models/public/efficientnet-b7_auto_aug/efficientnet-b7_auto_aug.md
@@ -9,8 +9,6 @@ This model was pretrained in TensorFlow\*.
 
 All the EfficientNet models have been pretrained on the ImageNet\* image database. For details about this family of models, check out the [TensorFlow Cloud TPU repository](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/face-recognition-mobilefacenet-arcface/face-recognition-mobilefacenet-arcface.md b/models/public/face-recognition-mobilefacenet-arcface/face-recognition-mobilefacenet-arcface.md
index 6e1640b3cbf..36aa0566b19 100644
--- a/models/public/face-recognition-mobilefacenet-arcface/face-recognition-mobilefacenet-arcface.md
+++ b/models/public/face-recognition-mobilefacenet-arcface/face-recognition-mobilefacenet-arcface.md
@@ -6,8 +6,6 @@ The original name of the model is [MobileFaceNet,ArcFace@ms1m-refine-v1](https:/
 
 [Deep face recognition net with MobileFaceNet backbone and Arcface loss](https://arxiv.org/abs/1801.07698)
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/face-recognition-resnet100-arcface/face-recognition-resnet100-arcface.md b/models/public/face-recognition-resnet100-arcface/face-recognition-resnet100-arcface.md
index c5de233d025..90fd385aa87 100644
--- a/models/public/face-recognition-resnet100-arcface/face-recognition-resnet100-arcface.md
+++ b/models/public/face-recognition-resnet100-arcface/face-recognition-resnet100-arcface.md
@@ -6,8 +6,6 @@ The original name of the model is [LResNet100E-IR,ArcFace@ms1m-refine-v2](https:
 
 [Deep face recognition net with ResNet100 backbone and Arcface loss](https://arxiv.org/abs/1801.07698)
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/face-recognition-resnet34-arcface/face-recognition-resnet34-arcface.md b/models/public/face-recognition-resnet34-arcface/face-recognition-resnet34-arcface.md
index c5e704fc828..bd2817287fa 100644
--- a/models/public/face-recognition-resnet34-arcface/face-recognition-resnet34-arcface.md
+++ b/models/public/face-recognition-resnet34-arcface/face-recognition-resnet34-arcface.md
@@ -6,8 +6,6 @@ The original name of the model is [LResNet34E-IR,ArcFace@ms1m-refine-v1](https:/
 
 [Deep face recognition net with ResNet34 backbone and Arcface
 loss](https://arxiv.org/abs/1801.07698)
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/face-recognition-resnet50-arcface/face-recognition-resnet50-arcface.md b/models/public/face-recognition-resnet50-arcface/face-recognition-resnet50-arcface.md
index 924852be692..96655b9dd12 100644
--- a/models/public/face-recognition-resnet50-arcface/face-recognition-resnet50-arcface.md
+++ b/models/public/face-recognition-resnet50-arcface/face-recognition-resnet50-arcface.md
@@ -6,8 +6,6 @@ The original name of the model is [LResNet50E-IR,ArcFace@ms1m-refine-v1](https:/
 
 [Deep face recognition net with ResNet50 backbone and Arcface loss](https://arxiv.org/abs/1801.07698)
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/faceboxes-pytorch/faceboxes-pytorch.md b/models/public/faceboxes-pytorch/faceboxes-pytorch.md
index e30a5f07a51..37602ad1117 100644
--- a/models/public/faceboxes-pytorch/faceboxes-pytorch.md
+++ b/models/public/faceboxes-pytorch/faceboxes-pytorch.md
@@ -5,8 +5,6 @@
 FaceBoxes: A CPU Real-time Face Detector with High Accuracy. For details see the [repository](https://github.com/zisianw/FaceBoxes.PyTorch), [paper](https://arxiv.org/pdf/1708.05234.pdf)
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/facenet-20180408-102900/facenet-20180408-102900.md b/models/public/facenet-20180408-102900/facenet-20180408-102900.md
index 5929be5fa8c..0ee3fa34e57 100644
--- a/models/public/facenet-20180408-102900/facenet-20180408-102900.md
+++ b/models/public/facenet-20180408-102900/facenet-20180408-102900.md
@@ -4,8 +4,6 @@ FaceNet: A Unified Embedding for Face Recognition and Clustering.
 
 For details see the [repository](https://github.com/davidsandberg/facenet/), [paper](https://arxiv.org/abs/1503.03832)
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/fast-neural-style-mosaic-onnx/fast-neural-style-mosaic-onnx.md b/models/public/fast-neural-style-mosaic-onnx/fast-neural-style-mosaic-onnx.md
index 16f22cdcb46..51617cda532 100644
--- a/models/public/fast-neural-style-mosaic-onnx/fast-neural-style-mosaic-onnx.md
+++ b/models/public/fast-neural-style-mosaic-onnx/fast-neural-style-mosaic-onnx.md
@@ -10,9 +10,6 @@ Transfer and Super-Resolution](https://arxiv.org/abs/1603.08155) along with
 models are provided in the [repository](https://github.com/onnx/models).
 
-## Example
-
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/faster_rcnn_inception_resnet_v2_atrous_coco/faster_rcnn_inception_resnet_v2_atrous_coco.md b/models/public/faster_rcnn_inception_resnet_v2_atrous_coco/faster_rcnn_inception_resnet_v2_atrous_coco.md
index ec8c8c020f4..10c5f0a28cc 100644
--- a/models/public/faster_rcnn_inception_resnet_v2_atrous_coco/faster_rcnn_inception_resnet_v2_atrous_coco.md
+++ b/models/public/faster_rcnn_inception_resnet_v2_atrous_coco/faster_rcnn_inception_resnet_v2_atrous_coco.md
@@ -4,8 +4,6 @@
 Faster R-CNN with Inception Resnet v2 Atrous version. Used for object detection. For details see the [paper](https://arxiv.org/abs/1506.01497v3).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/faster_rcnn_inception_v2_coco/faster_rcnn_inception_v2_coco.md b/models/public/faster_rcnn_inception_v2_coco/faster_rcnn_inception_v2_coco.md
index 898390cfc3e..ac73f015980 100644
--- a/models/public/faster_rcnn_inception_v2_coco/faster_rcnn_inception_v2_coco.md
+++ b/models/public/faster_rcnn_inception_v2_coco/faster_rcnn_inception_v2_coco.md
@@ -4,8 +4,6 @@
 Faster R-CNN with Inception v2. Used for object detection. For details, see the [paper](https://arxiv.org/abs/1506.01497v3).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/faster_rcnn_resnet101_coco/faster_rcnn_resnet101_coco.md b/models/public/faster_rcnn_resnet101_coco/faster_rcnn_resnet101_coco.md
index 6d507f420e9..b6934eb3146 100644
--- a/models/public/faster_rcnn_resnet101_coco/faster_rcnn_resnet101_coco.md
+++ b/models/public/faster_rcnn_resnet101_coco/faster_rcnn_resnet101_coco.md
@@ -4,8 +4,6 @@
 Faster R-CNN Resnet-101 model. Used for object detection. For details, see the [paper](https://arxiv.org/abs/1506.01497v3).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/faster_rcnn_resnet50_coco/faster_rcnn_resnet50_coco.md b/models/public/faster_rcnn_resnet50_coco/faster_rcnn_resnet50_coco.md
index a0a7107854e..06732c77e00 100644
--- a/models/public/faster_rcnn_resnet50_coco/faster_rcnn_resnet50_coco.md
+++ b/models/public/faster_rcnn_resnet50_coco/faster_rcnn_resnet50_coco.md
@@ -4,8 +4,6 @@
 Faster R-CNN Resnet-50 model. Used for object detection. For details, see the [paper](https://arxiv.org/abs/1506.01497v3).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/gmcnn-places2-tf/gmcnn-places2-tf.md b/models/public/gmcnn-places2-tf/gmcnn-places2-tf.md
index 5b05f6a1c3d..60b6168b674 100644
--- a/models/public/gmcnn-places2-tf/gmcnn-places2-tf.md
+++ b/models/public/gmcnn-places2-tf/gmcnn-places2-tf.md
@@ -30,8 +30,6 @@ git apply path/to/freeze_model.patch
 python3 freeze_model.py --ckpt_dir path/to/downloaded_weights --save_dir path/to/save_directory
 ```
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/googlenet-v1-tf/googlenet-v1-tf.md b/models/public/googlenet-v1-tf/googlenet-v1-tf.md
index b8194a4e39d..856aaf082b8 100644
--- a/models/public/googlenet-v1-tf/googlenet-v1-tf.md
+++ b/models/public/googlenet-v1-tf/googlenet-v1-tf.md
@@ -32,8 +32,6 @@ pip install tensorflow==1.14.0
 python3 freeze.py --ckpt path/to/inception_v1.ckpt --name inception_v1 --num_classes 1001 --output InceptionV1/Logits/Predictions/Softmax
 ```
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/googlenet-v1/googlenet-v1.md b/models/public/googlenet-v1/googlenet-v1.md
index fa2d6205461..c5140621911 100644
--- a/models/public/googlenet-v1/googlenet-v1.md
+++ b/models/public/googlenet-v1/googlenet-v1.md
@@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of 1x3x224x224 in BGR
 
 The model output for `googlenet-v1` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/googlenet-v2-tf/googlenet-v2-tf.md b/models/public/googlenet-v2-tf/googlenet-v2-tf.md
index cab0a18e586..19c14d8ead5 100644
--- a/models/public/googlenet-v2-tf/googlenet-v2-tf.md
+++ b/models/public/googlenet-v2-tf/googlenet-v2-tf.md
@@ -32,8 +32,6 @@ pip install tensorflow==1.14.0
 python3 freeze.py --ckpt path/to/inception_v2.ckpt --name inception_v2 --num_classes 1001 --output InceptionV2/Predictions/Softmax
 ```
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/googlenet-v2/googlenet-v2.md b/models/public/googlenet-v2/googlenet-v2.md
index f42dbf0a17c..8510a796338 100644
--- a/models/public/googlenet-v2/googlenet-v2.md
+++ b/models/public/googlenet-v2/googlenet-v2.md
@@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of 1x3x224x224 in BGR
 
 The model output for `googlenet-v2` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/googlenet-v3-pytorch/googlenet-v3-pytorch.md b/models/public/googlenet-v3-pytorch/googlenet-v3-pytorch.md
index 495669dd298..cbdcf616fa7 100644
--- a/models/public/googlenet-v3-pytorch/googlenet-v3-pytorch.md
+++ b/models/public/googlenet-v3-pytorch/googlenet-v3-pytorch.md
@@ -13,8 +13,6 @@ in RGB order.
 
 The model output is typical object classifier for the 1000 different classifications matching with those in the ImageNet database.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/googlenet-v3/googlenet-v3.md b/models/public/googlenet-v3/googlenet-v3.md
index 0f0d1d971a8..d2e0b5141e9 100644
--- a/models/public/googlenet-v3/googlenet-v3.md
+++ b/models/public/googlenet-v3/googlenet-v3.md
@@ -4,8 +4,6 @@
 The `googlenet-v3` model is the first of the Inception family of models designed to perform image classification.
 For details about this family of models, check out the [paper](https://arxiv.org/abs/1602.07261).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/googlenet-v4-tf/googlenet-v4-tf.md b/models/public/googlenet-v4-tf/googlenet-v4-tf.md
index e563f644ffb..146e8d5d8b1 100644
--- a/models/public/googlenet-v4-tf/googlenet-v4-tf.md
+++ b/models/public/googlenet-v4-tf/googlenet-v4-tf.md
@@ -32,8 +32,6 @@ pip install tensorflow==1.14.0
 python3 freeze.py --ckpt path/to/inception_v4.ckpt --name inception_v4 --num_classes 1001 --output InceptionV4/Logits/Predictions
 ```
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/hbonet-0.25/hbonet-0.25.md b/models/public/hbonet-0.25/hbonet-0.25.md
index ee56b194527..42269a30823 100644
--- a/models/public/hbonet-0.25/hbonet-0.25.md
+++ b/models/public/hbonet-0.25/hbonet-0.25.md
@@ -4,8 +4,6 @@
 The `hbonet-0.25` model is one of the classification models from [repository](https://github.com/d-li14/HBONet) with `width_mult=0.25`
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/hbonet-0.5/hbonet-0.5.md b/models/public/hbonet-0.5/hbonet-0.5.md
index 92ec6cc903f..789c9c37098 100644
--- a/models/public/hbonet-0.5/hbonet-0.5.md
+++ b/models/public/hbonet-0.5/hbonet-0.5.md
@@ -4,8 +4,6 @@
 The `hbonet-0.5` model is one of the classification models from [repository](https://github.com/d-li14/HBONet) with `width_mult=0.5`
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/hbonet-1.0/hbonet-1.0.md b/models/public/hbonet-1.0/hbonet-1.0.md
index 2b562b59394..1c71e1ef9c4 100644
--- a/models/public/hbonet-1.0/hbonet-1.0.md
+++ b/models/public/hbonet-1.0/hbonet-1.0.md
@@ -4,8 +4,6 @@
 The `hbonet-1.0` model is one of the classification models from [repository](https://github.com/d-li14/HBONet) with `width_mult=1.0`
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.md b/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.md
index 33bca70fd98..a1b421c859d 100644
--- a/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.md
+++ b/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.md
@@ -8,8 +8,6 @@ This is PyTorch\* implementation based on retaining high resolution representati
 and pretrained on ADE20k dataset. For details about implementation of model, check out the [Semantic Segmentation on MIT ADE20K dataset in PyTorch](https://github.com/CSAILVision/semantic-segmentation-pytorch) repository.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/inception-resnet-v2-tf/inception-resnet-v2-tf.md b/models/public/inception-resnet-v2-tf/inception-resnet-v2-tf.md
index 376ee9984c1..dae97ed9fb7 100644
--- a/models/public/inception-resnet-v2-tf/inception-resnet-v2-tf.md
+++ b/models/public/inception-resnet-v2-tf/inception-resnet-v2-tf.md
@@ -4,8 +4,6 @@
 The `inception-resnet-v2` model is one of the Inception family of models designed to perform image classification. For details about this family of models, check out the [paper](https://arxiv.org/abs/1602.07261).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/mask_rcnn_inception_resnet_v2_atrous_coco/mask_rcnn_inception_resnet_v2_atrous_coco.md b/models/public/mask_rcnn_inception_resnet_v2_atrous_coco/mask_rcnn_inception_resnet_v2_atrous_coco.md
index 70d62b6e2b4..afe6e9802b6 100644
--- a/models/public/mask_rcnn_inception_resnet_v2_atrous_coco/mask_rcnn_inception_resnet_v2_atrous_coco.md
+++ b/models/public/mask_rcnn_inception_resnet_v2_atrous_coco/mask_rcnn_inception_resnet_v2_atrous_coco.md
@@ -4,8 +4,6 @@
 Mask R-CNN Inception Resnet V2 Atrous is trained on COCO dataset and used for object instance segmentation. For details, see a [paper](https://arxiv.org/abs/1703.06870).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/mask_rcnn_inception_v2_coco/mask_rcnn_inception_v2_coco.md b/models/public/mask_rcnn_inception_v2_coco/mask_rcnn_inception_v2_coco.md
index 6f2f400276a..d19daaa48d1 100644
--- a/models/public/mask_rcnn_inception_v2_coco/mask_rcnn_inception_v2_coco.md
+++ b/models/public/mask_rcnn_inception_v2_coco/mask_rcnn_inception_v2_coco.md
@@ -5,8 +5,6 @@
 Mask R-CNN Inception V2 trained on the COCO dataset. The model is used for object instance segmentation. For details, see a [paper](https://arxiv.org/abs/1703.06870).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/mask_rcnn_resnet101_atrous_coco/mask_rcnn_resnet101_atrous_coco.md b/models/public/mask_rcnn_resnet101_atrous_coco/mask_rcnn_resnet101_atrous_coco.md
index f3a6f81df12..3d92bb643cd 100644
--- a/models/public/mask_rcnn_resnet101_atrous_coco/mask_rcnn_resnet101_atrous_coco.md
+++ b/models/public/mask_rcnn_resnet101_atrous_coco/mask_rcnn_resnet101_atrous_coco.md
@@ -4,8 +4,6 @@
 Mask R-CNN Resnet101 Atrous is trained on COCO dataset and used for object instance segmentation. For details, see a [paper](https://arxiv.org/abs/1703.06870).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/mask_rcnn_resnet50_atrous_coco/mask_rcnn_resnet50_atrous_coco.md b/models/public/mask_rcnn_resnet50_atrous_coco/mask_rcnn_resnet50_atrous_coco.md
index 13d5e60cf43..2752a794269 100644
--- a/models/public/mask_rcnn_resnet50_atrous_coco/mask_rcnn_resnet50_atrous_coco.md
+++ b/models/public/mask_rcnn_resnet50_atrous_coco/mask_rcnn_resnet50_atrous_coco.md
@@ -5,8 +5,6 @@
 Mask R-CNN Resnet50 Atrous trained on COCO dataset. It is used for object instance segmentation. For details, see the [paper](https://arxiv.org/abs/1703.06870).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/mobilenet-ssd/mobilenet-ssd.md b/models/public/mobilenet-ssd/mobilenet-ssd.md
index e46c8ccfd2e..4b8a88902b9 100644
--- a/models/public/mobilenet-ssd/mobilenet-ssd.md
+++ b/models/public/mobilenet-ssd/mobilenet-ssd.md
@@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of 1x3x300x300 in BGR
 
 The model output is a typical vector containing the tracked object data, as previously described.
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/mobilenet-v1-0.25-128/mobilenet-v1-0.25-128.md b/models/public/mobilenet-v1-0.25-128/mobilenet-v1-0.25-128.md
index eb7c553fbc2..c01e046357e 100644
--- a/models/public/mobilenet-v1-0.25-128/mobilenet-v1-0.25-128.md
+++ b/models/public/mobilenet-v1-0.25-128/mobilenet-v1-0.25-128.md
@@ -4,8 +4,6 @@
 `mobilenet-v1-0.25-128` is one of MobileNets - small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models are used. For details, see [paper](https://arxiv.org/abs/1704.04861).
 
-## Example
-
 ## Specification
 
 | Metric | Value |
diff --git a/models/public/mobilenet-v1-0.50-160/mobilenet-v1-0.50-160.md b/models/public/mobilenet-v1-0.50-160/mobilenet-v1-0.50-160.md
index aba49ba9cee..142255fe84c 100644
--- a/models/public/mobilenet-v1-0.50-160/mobilenet-v1-0.50-160.md
+++ b/models/public/mobilenet-v1-0.50-160/mobilenet-v1-0.50-160.md
@@ -4,8 +4,6 @@
 `mobilenet-v1-0.50-160` is one of MobileNets - small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models are used. For details, see [paper](https://arxiv.org/abs/1704.04861).
-## Example - ## Specification | Metric | Value | diff --git a/models/public/mobilenet-v1-0.50-224/mobilenet-v1-0.50-224.md b/models/public/mobilenet-v1-0.50-224/mobilenet-v1-0.50-224.md index dd8eb4e7720..d99d3ba2d31 100644 --- a/models/public/mobilenet-v1-0.50-224/mobilenet-v1-0.50-224.md +++ b/models/public/mobilenet-v1-0.50-224/mobilenet-v1-0.50-224.md @@ -4,8 +4,6 @@ `mobilenet-v1-0.50-224` is one of MobileNets - small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models are used. For details, see [paper](https://arxiv.org/abs/1704.04861). -## Example - ## Specification | Metric | Value | diff --git a/models/public/mobilenet-v1-1.0-224-tf/mobilenet-v1-1.0-224-tf.md b/models/public/mobilenet-v1-1.0-224-tf/mobilenet-v1-1.0-224-tf.md index ba664276452..5cf175fce42 100644 --- a/models/public/mobilenet-v1-1.0-224-tf/mobilenet-v1-1.0-224-tf.md +++ b/models/public/mobilenet-v1-1.0-224-tf/mobilenet-v1-1.0-224-tf.md @@ -4,8 +4,6 @@ `mobilenet-v1-1.0-224` is one of MobileNets - small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models are used. For details, see the [paper](https://arxiv.org/abs/1704.04861). -## Example - ## Specification | Metric | Value | diff --git a/models/public/mobilenet-v1-1.0-224/mobilenet-v1-1.0-224.md b/models/public/mobilenet-v1-1.0-224/mobilenet-v1-1.0-224.md index 67771927906..421fb3a88e5 100644 --- a/models/public/mobilenet-v1-1.0-224/mobilenet-v1-1.0-224.md +++ b/models/public/mobilenet-v1-1.0-224/mobilenet-v1-1.0-224.md @@ -4,8 +4,6 @@ `mobilenet-v1-1.0-224` is one of [MobileNet V1 architecture](https://arxiv.org/abs/1704.04861) with the width multiplier 1.0 and resolution 224. 
It is small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models are used. -## Example - ## Specification | Metric | Value | diff --git a/models/public/mobilenet-v2-1.0-224/mobilenet-v2-1.0-224.md b/models/public/mobilenet-v2-1.0-224/mobilenet-v2-1.0-224.md index dc7e2b967de..83cc45d6587 100644 --- a/models/public/mobilenet-v2-1.0-224/mobilenet-v2-1.0-224.md +++ b/models/public/mobilenet-v2-1.0-224/mobilenet-v2-1.0-224.md @@ -4,8 +4,6 @@ `mobilenet-v2-1.0-224` is one of MobileNet\* models, which are small, low-latency, low-power, and parameterized to meet the resource constraints of a variety of use cases. They can be used for classification, detection, embeddings, and segmentation like other popular large-scale models. For details, see the [paper](https://arxiv.org/abs/1704.04861). -## Example - ## Specification | Metric | Value | diff --git a/models/public/mobilenet-v2-1.4-224/mobilenet-v2-1.4-224.md b/models/public/mobilenet-v2-1.4-224/mobilenet-v2-1.4-224.md index cbb7d37274b..83fac64d3b7 100644 --- a/models/public/mobilenet-v2-1.4-224/mobilenet-v2-1.4-224.md +++ b/models/public/mobilenet-v2-1.4-224/mobilenet-v2-1.4-224.md @@ -4,8 +4,6 @@ `mobilenet-v2-1.4-224` is one of MobileNets - small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models are used. For details, see the [paper](https://arxiv.org/abs/1704.04861). 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/mobilenet-v2-pytorch/mobilenet-v2-pytorch.md b/models/public/mobilenet-v2-pytorch/mobilenet-v2-pytorch.md index 0f570c0d561..63cbd3b7e8d 100644 --- a/models/public/mobilenet-v2-pytorch/mobilenet-v2-pytorch.md +++ b/models/public/mobilenet-v2-pytorch/mobilenet-v2-pytorch.md @@ -13,8 +13,6 @@ in RGB order. The model output is typical object classifier for the 1000 different classifications matching with those in the ImageNet database. -## Example - ## Specification | Metric | Value | diff --git a/models/public/mobilenet-v2/mobilenet-v2.md b/models/public/mobilenet-v2/mobilenet-v2.md index daf952b1e7f..f5a9b0a84b7 100644 --- a/models/public/mobilenet-v2/mobilenet-v2.md +++ b/models/public/mobilenet-v2/mobilenet-v2.md @@ -4,8 +4,6 @@ [MobileNet V2](https://arxiv.org/abs/1801.04381) -## Example - ## Specification | Metric | Value | diff --git a/models/public/mobilenet-v3-large-1.0-224-tf/mobilenet-v3-large-1.0-224-tf.md b/models/public/mobilenet-v3-large-1.0-224-tf/mobilenet-v3-large-1.0-224-tf.md index c02ccc4b222..848530223d4 100644 --- a/models/public/mobilenet-v3-large-1.0-224-tf/mobilenet-v3-large-1.0-224-tf.md +++ b/models/public/mobilenet-v3-large-1.0-224-tf/mobilenet-v3-large-1.0-224-tf.md @@ -7,8 +7,6 @@ based on a combination of complementary search techniques as well as a novel arc `mobilenet-v3-large-1.0-224-tf` is targeted for high resource use cases. For details see [paper](https://arxiv.org/abs/1905.02244). 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/mobilenet-v3-small-1.0-224-tf/mobilenet-v3-small-1.0-224-tf.md b/models/public/mobilenet-v3-small-1.0-224-tf/mobilenet-v3-small-1.0-224-tf.md index 2d419dfefe6..8622d7ed8c9 100644 --- a/models/public/mobilenet-v3-small-1.0-224-tf/mobilenet-v3-small-1.0-224-tf.md +++ b/models/public/mobilenet-v3-small-1.0-224-tf/mobilenet-v3-small-1.0-224-tf.md @@ -7,8 +7,6 @@ based on a combination of complementary search techniques as well as a novel arc `mobilenet-v3-small-1.0-224-tf` is targeted for low resource use cases. For details see [paper](https://arxiv.org/abs/1905.02244). -## Example - ## Specification | Metric | Value | diff --git a/models/public/mozilla-deepspeech-0.6.1/mozilla-deepspeech-0.6.1.md b/models/public/mozilla-deepspeech-0.6.1/mozilla-deepspeech-0.6.1.md index 5f055fa677a..0a3a8353517 100644 --- a/models/public/mozilla-deepspeech-0.6.1/mozilla-deepspeech-0.6.1.md +++ b/models/public/mozilla-deepspeech-0.6.1/mozilla-deepspeech-0.6.1.md @@ -10,8 +10,6 @@ For details on the original DeepSpeech, see paper . -## Example - ## Specification | Metric | Value | diff --git a/models/public/octave-densenet-121-0.125/octave-densenet-121-0.125.md b/models/public/octave-densenet-121-0.125/octave-densenet-121-0.125.md index e691f520e09..223de5908f1 100644 --- a/models/public/octave-densenet-121-0.125/octave-densenet-121-0.125.md +++ b/models/public/octave-densenet-121-0.125/octave-densenet-121-0.125.md @@ -5,8 +5,6 @@ The `octave-densenet-121-0.125` model is a modification of [`densenet-121`](https://arxiv.org/abs/1608.06993) with Octave convolutions from [Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution](https://arxiv.org/abs/1904.05049) with `alpha=0.125`. Like the original model, this model is designed for image classification. 
For details about family of Octave Convolution models, check out the [repository](https://github.com/facebookresearch/OctConv). -## Example - ## Specification | Metric | Value | diff --git a/models/public/octave-resnet-101-0.125/octave-resnet-101-0.125.md b/models/public/octave-resnet-101-0.125/octave-resnet-101-0.125.md index eb82e2c35bc..480524a50ba 100644 --- a/models/public/octave-resnet-101-0.125/octave-resnet-101-0.125.md +++ b/models/public/octave-resnet-101-0.125/octave-resnet-101-0.125.md @@ -5,8 +5,6 @@ The `octave-resnet-101-0.125` model is a modification of [ResNet-101](https://arxiv.org/abs/1512.03385) with Octave convolutions from [Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution](https://arxiv.org/abs/1904.05049) with `alpha=0.125`. Like the original model, this model is designed for image classification. For details about family of Octave Convolution models, check out the [repository](https://github.com/facebookresearch/OctConv). -## Example - ## Specification | Metric | Value | diff --git a/models/public/octave-resnet-200-0.125/octave-resnet-200-0.125.md b/models/public/octave-resnet-200-0.125/octave-resnet-200-0.125.md index efbaf61090d..2f6d3cf106d 100644 --- a/models/public/octave-resnet-200-0.125/octave-resnet-200-0.125.md +++ b/models/public/octave-resnet-200-0.125/octave-resnet-200-0.125.md @@ -4,8 +4,6 @@ The `octave-resnet-200-0.125` model is a modification of [`resnet-200`](https://arxiv.org/abs/1512.03385) with Octave convolutions from [Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution](https://arxiv.org/abs/1904.05049) with `alpha=0.125`. Like the original model, this model is designed for image classification. For details about family of Octave Convolution models, check out the [repository](https://github.com/facebookresearch/OctConv). 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/octave-resnet-26-0.25/octave-resnet-26-0.25.md b/models/public/octave-resnet-26-0.25/octave-resnet-26-0.25.md index 31a2344ad17..af677883d81 100644 --- a/models/public/octave-resnet-26-0.25/octave-resnet-26-0.25.md +++ b/models/public/octave-resnet-26-0.25/octave-resnet-26-0.25.md @@ -5,8 +5,6 @@ The `octave-resnet-26-0.25` model is a modification of [`resnet-26`](https://arxiv.org/abs/1512.03385) with Octave convolutions from [Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution](https://arxiv.org/abs/1904.05049) with `alpha=0.25`. Like the original model, this model is designed for image classification. For details about family of Octave Convolution models, check out the [repository](https://github.com/facebookresearch/OctConv). -## Example - ## Specification | Metric | Value | diff --git a/models/public/octave-resnet-50-0.125/octave-resnet-50-0.125.md b/models/public/octave-resnet-50-0.125/octave-resnet-50-0.125.md index 7abcaea0cf3..a186384e54e 100644 --- a/models/public/octave-resnet-50-0.125/octave-resnet-50-0.125.md +++ b/models/public/octave-resnet-50-0.125/octave-resnet-50-0.125.md @@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of 1x3x224x224 in RGB The model output for `octave-resnet-50-0.125` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/octave-resnext-101-0.25/octave-resnext-101-0.25.md b/models/public/octave-resnext-101-0.25/octave-resnext-101-0.25.md index cb3d307da22..5be3b2f720b 100644 --- a/models/public/octave-resnext-101-0.25/octave-resnext-101-0.25.md +++ b/models/public/octave-resnext-101-0.25/octave-resnext-101-0.25.md @@ -4,8 +4,6 @@ The `octave-resnext-101-0.25` model is a modification of [`resnext-101`](https://arxiv.org/abs/1611.05431) with Octave convolutions from [Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution](https://arxiv.org/abs/1904.05049) with `alpha=0.25`. Like the original model, this model is designed for image classification. For details about family of Octave Convolution models, check out the [repository](https://github.com/facebookresearch/OctConv). -## Example - ## Specification | Metric | Value | diff --git a/models/public/octave-resnext-50-0.25/octave-resnext-50-0.25.md b/models/public/octave-resnext-50-0.25/octave-resnext-50-0.25.md index 5ddf8c03525..f6b6bcd7e3c 100644 --- a/models/public/octave-resnext-50-0.25/octave-resnext-50-0.25.md +++ b/models/public/octave-resnext-50-0.25/octave-resnext-50-0.25.md @@ -4,8 +4,6 @@ The `octave-resnext-50-0.25` model is a modification of [`resnext-50`](https://arxiv.org/abs/1611.05431) with Octave convolutions from [Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution](https://arxiv.org/abs/1904.05049) with `alpha=0.25`. Like the original model, this model is designed for image classification. For details about family of Octave Convolution models, check out the [repository](https://github.com/facebookresearch/OctConv). 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/octave-se-resnet-50-0.125/octave-se-resnet-50-0.125.md b/models/public/octave-se-resnet-50-0.125/octave-se-resnet-50-0.125.md index 10ae83985bf..b796fd4af2a 100644 --- a/models/public/octave-se-resnet-50-0.125/octave-se-resnet-50-0.125.md +++ b/models/public/octave-se-resnet-50-0.125/octave-se-resnet-50-0.125.md @@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of 1x3x224x224 in RGB The model output for `octave-se-resnet-50-0.125` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. -## Example - ## Specification | Metric | Value | diff --git a/models/public/resnest-50-pytorch/resnest-50-pytorch.md b/models/public/resnest-50-pytorch/resnest-50-pytorch.md index 29e8c4df580..57b759ec4c9 100644 --- a/models/public/resnest-50-pytorch/resnest-50-pytorch.md +++ b/models/public/resnest-50-pytorch/resnest-50-pytorch.md @@ -10,8 +10,6 @@ The model output is typical object classifier for the 1000 different classificat For details see [repository](https://github.com/zhanghang1989/ResNeSt) and [paper](https://arxiv.org/pdf/2004.08955.pdf). -## Example - ## Specification | Metric | Value | diff --git a/models/public/resnet-18-pytorch/resnet-18-pytorch.md b/models/public/resnet-18-pytorch/resnet-18-pytorch.md index cb1457b51a5..6b12f421e17 100644 --- a/models/public/resnet-18-pytorch/resnet-18-pytorch.md +++ b/models/public/resnet-18-pytorch/resnet-18-pytorch.md @@ -13,8 +13,6 @@ in RGB order. The model output is typical object classifier for the 1000 different classifications matching with those in the ImageNet database. 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/resnet-34-pytorch/resnet-34-pytorch.md b/models/public/resnet-34-pytorch/resnet-34-pytorch.md index 311f4bed32f..1b19adc7346 100644 --- a/models/public/resnet-34-pytorch/resnet-34-pytorch.md +++ b/models/public/resnet-34-pytorch/resnet-34-pytorch.md @@ -13,8 +13,6 @@ in RGB order. The model output is typical object classifier for the 1000 different classifications matching with those in the ImageNet database. -## Example - ## Specification | Metric | Value | diff --git a/models/public/resnet-50-caffe2/resnet-50-caffe2.md b/models/public/resnet-50-caffe2/resnet-50-caffe2.md index ed73710b611..ef746394ed6 100644 --- a/models/public/resnet-50-caffe2/resnet-50-caffe2.md +++ b/models/public/resnet-50-caffe2/resnet-50-caffe2.md @@ -7,8 +7,6 @@ This model was converted from Caffe\* to Caffe2\* format. For details see repository , paper . -## Example - ## Specification | Metric | Value | diff --git a/models/public/resnet-50-pytorch/resnet-50-pytorch.md b/models/public/resnet-50-pytorch/resnet-50-pytorch.md index 3f60baf3bdf..30c9edd2f91 100644 --- a/models/public/resnet-50-pytorch/resnet-50-pytorch.md +++ b/models/public/resnet-50-pytorch/resnet-50-pytorch.md @@ -13,8 +13,6 @@ in RGB order. The model output is typical object classifier for the 1000 different classifications matching with those in the ImageNet database. 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/resnet-50-tf/resnet-50-tf.md b/models/public/resnet-50-tf/resnet-50-tf.md index c20b0e24df2..51d7da99e0f 100644 --- a/models/public/resnet-50-tf/resnet-50-tf.md +++ b/models/public/resnet-50-tf/resnet-50-tf.md @@ -17,8 +17,6 @@ For details see [paper](https://arxiv.org/abs/1512.03385), python3 freeze_saved_model.py --saved_model_dir path/to/downloaded/saved_model --save_file path/to/resulting/frozen_graph.pb ``` -## Example - ## Specification | Metric | Value | diff --git a/models/public/retinanet-tf/retinanet-tf.md b/models/public/retinanet-tf/retinanet-tf.md index f1609ebc450..a4841e6ccdf 100644 --- a/models/public/retinanet-tf/retinanet-tf.md +++ b/models/public/retinanet-tf/retinanet-tf.md @@ -29,8 +29,6 @@ converted to TensorFlow\* protobuf format. For details, see [paper](https://arxi python keras_to_tensorflow.py --input_model=.h5 --output_model=.pb ``` -## Example - ## Specification | Metric | Value | diff --git a/models/public/rfcn-resnet101-coco-tf/rfcn-resnet101-coco-tf.md b/models/public/rfcn-resnet101-coco-tf/rfcn-resnet101-coco-tf.md index 7e7cdfe82b8..15ab7527558 100644 --- a/models/public/rfcn-resnet101-coco-tf/rfcn-resnet101-coco-tf.md +++ b/models/public/rfcn-resnet101-coco-tf/rfcn-resnet101-coco-tf.md @@ -4,8 +4,6 @@ R-FCN Resnet-101 model, pretrained on COCO\* dataset. Used for object detection. For details, see the [paper](https://arxiv.org/abs/1605.06409). 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/se-inception/se-inception.md b/models/public/se-inception/se-inception.md index afa10cc9637..602b33a05b0 100644 --- a/models/public/se-inception/se-inception.md +++ b/models/public/se-inception/se-inception.md @@ -4,8 +4,6 @@ [BN-Inception with Squeeze-and-Excitation blocks](https://arxiv.org/abs/1709.01507) -## Example - ## Specification | Metric | Value | diff --git a/models/public/se-resnet-101/se-resnet-101.md b/models/public/se-resnet-101/se-resnet-101.md index b841cdffb55..519b705d874 100644 --- a/models/public/se-resnet-101/se-resnet-101.md +++ b/models/public/se-resnet-101/se-resnet-101.md @@ -4,8 +4,6 @@ [ResNet-101 with Squeeze-and-Excitation blocks](https://arxiv.org/abs/1709.01507) -## Example - ## Specification | Metric | Value | diff --git a/models/public/se-resnet-152/se-resnet-152.md b/models/public/se-resnet-152/se-resnet-152.md index bbc155d7320..2b687ea860b 100644 --- a/models/public/se-resnet-152/se-resnet-152.md +++ b/models/public/se-resnet-152/se-resnet-152.md @@ -4,8 +4,6 @@ [ResNet-152 with Squeeze-and-Excitation blocks](https://arxiv.org/abs/1709.01507) -## Example - ## Specification | Metric | Value | diff --git a/models/public/se-resnet-50/se-resnet-50.md b/models/public/se-resnet-50/se-resnet-50.md index eaddda36109..c7597a30467 100644 --- a/models/public/se-resnet-50/se-resnet-50.md +++ b/models/public/se-resnet-50/se-resnet-50.md @@ -4,8 +4,6 @@ [ResNet-50 with Squeeze-and-Excitation blocks](https://arxiv.org/abs/1709.01507) -## Example - ## Specification | Metric | Value | diff --git a/models/public/se-resnext-101/se-resnext-101.md b/models/public/se-resnext-101/se-resnext-101.md index dd1e127ab2f..676d8b707d4 100644 --- a/models/public/se-resnext-101/se-resnext-101.md +++ b/models/public/se-resnext-101/se-resnext-101.md @@ -4,8 +4,6 @@ [ResNext-101 with Squeeze-and-Excitation blocks](https://arxiv.org/abs/1709.01507) -## Example - ## Specification | Metric 
| Value | diff --git a/models/public/se-resnext-50/se-resnext-50.md b/models/public/se-resnext-50/se-resnext-50.md index 847bff52667..034bfb5720c 100644 --- a/models/public/se-resnext-50/se-resnext-50.md +++ b/models/public/se-resnext-50/se-resnext-50.md @@ -4,8 +4,6 @@ [ResNext-50 with Squeeze-and-Excitation blocks](https://arxiv.org/abs/1709.01507) -## Example - ## Specification | Metric | Value | diff --git a/models/public/shufflenet-v2-x1.0/shufflenet-v2-x1.0.md b/models/public/shufflenet-v2-x1.0/shufflenet-v2-x1.0.md index 2ffbc93d09c..1cccc80687a 100644 --- a/models/public/shufflenet-v2-x1.0/shufflenet-v2-x1.0.md +++ b/models/public/shufflenet-v2-x1.0/shufflenet-v2-x1.0.md @@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of "1x3x224x224" in RG The model output is typical object classifier for the 1000 different classifications matching with those in the ImageNet database. -## Example - ## Specification | Metric | Value | diff --git a/models/public/squeezenet1.0/squeezenet1.0.md b/models/public/squeezenet1.0/squeezenet1.0.md index eebf9862d0f..f2a77edbaa9 100644 --- a/models/public/squeezenet1.0/squeezenet1.0.md +++ b/models/public/squeezenet1.0/squeezenet1.0.md @@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of 1x3x227x227 in BGR The model output for `squeezenet1.0` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. -## Example - ## Specification | Metric | Value | diff --git a/models/public/squeezenet1.1-caffe2/squeezenet1.1-caffe2.md b/models/public/squeezenet1.1-caffe2/squeezenet1.1-caffe2.md index 4596ea65280..3b9ea0abe22 100644 --- a/models/public/squeezenet1.1-caffe2/squeezenet1.1-caffe2.md +++ b/models/public/squeezenet1.1-caffe2/squeezenet1.1-caffe2.md @@ -7,8 +7,6 @@ This model was converted from Caffe\* to Caffe2\* format. For details see repository , paper . 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/squeezenet1.1/squeezenet1.1.md b/models/public/squeezenet1.1/squeezenet1.1.md index 0acc99e3ce5..20cba3eb7c0 100644 --- a/models/public/squeezenet1.1/squeezenet1.1.md +++ b/models/public/squeezenet1.1/squeezenet1.1.md @@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of 1x3x227x227 in BGR The model output for `squeezenet1.1` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. -## Example - ## Specification | Metric | Value | diff --git a/models/public/ssd-resnet34-1200-onnx/ssd-resnet34-1200-onnx.md b/models/public/ssd-resnet34-1200-onnx/ssd-resnet34-1200-onnx.md index bf119135345..2ca41511c4e 100644 --- a/models/public/ssd-resnet34-1200-onnx/ssd-resnet34-1200-onnx.md +++ b/models/public/ssd-resnet34-1200-onnx/ssd-resnet34-1200-onnx.md @@ -4,8 +4,6 @@ The `ssd-resnet-34-1200-onnx` model is a multiscale SSD based on ResNet-34 backbone network intended to perform object detection. The model has been trained from the Common Objects in Context (COCO) image dataset. This model is pretrained in PyTorch\* framework and converted to ONNX\* format. For additional information refer to [repository](https://github.com/mlperf/inference/tree/master/vision/classification_and_detection). -## Example - ## Specification | Metric | Value | diff --git a/models/public/ssd_mobilenet_v1_coco/ssd_mobilenet_v1_coco.md b/models/public/ssd_mobilenet_v1_coco/ssd_mobilenet_v1_coco.md index cc878be0708..95794e1396c 100644 --- a/models/public/ssd_mobilenet_v1_coco/ssd_mobilenet_v1_coco.md +++ b/models/public/ssd_mobilenet_v1_coco/ssd_mobilenet_v1_coco.md @@ -4,8 +4,6 @@ The `ssd_mobilenet_v1_coco` model is a [Single-Shot multibox Detection (SSD)](https://arxiv.org/abs/1801.04381) network intended to perform object detection. 
The difference between this model and the `mobilenet-ssd` is that there the `mobilenet-ssd` can only detect face, the `ssd_mobilenet_v1_coco` model can detect objects. -## Example - ## Specification | Metric | Value | diff --git a/models/public/ssd_mobilenet_v1_fpn_coco/ssd_mobilenet_v1_fpn_coco.md b/models/public/ssd_mobilenet_v1_fpn_coco/ssd_mobilenet_v1_fpn_coco.md index 3e0875445b1..60b92b3cb53 100644 --- a/models/public/ssd_mobilenet_v1_fpn_coco/ssd_mobilenet_v1_fpn_coco.md +++ b/models/public/ssd_mobilenet_v1_fpn_coco/ssd_mobilenet_v1_fpn_coco.md @@ -4,8 +4,6 @@ MobileNetV1 FPN is used for object detection. For details, see the [paper](https://arxiv.org/abs/1807.03284). -## Example - ## Specification | Metric | Value | diff --git a/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco.md b/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco.md index bc00def432b..6dd7226e8a6 100644 --- a/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco.md +++ b/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco.md @@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of 1x3x300x300 in RGB The model output is a typical vector containing the tracked object data, as previously described. Note that the "class_id" data is now significant and should be used to determine the classification for any detected object. -## Example - ## Specification | Metric | Value | diff --git a/models/public/ssd_resnet50_v1_fpn_coco/ssd_resnet50_v1_fpn_coco.md b/models/public/ssd_resnet50_v1_fpn_coco/ssd_resnet50_v1_fpn_coco.md index a4c63f140b8..17a1835cafc 100644 --- a/models/public/ssd_resnet50_v1_fpn_coco/ssd_resnet50_v1_fpn_coco.md +++ b/models/public/ssd_resnet50_v1_fpn_coco/ssd_resnet50_v1_fpn_coco.md @@ -7,8 +7,6 @@ The model has been trained from the Common Objects in Context (COCO) image datas For details see the [repository](https://github.com/tensorflow/models/tree/master/research/object_detection) and [paper](https://arxiv.org/abs/1708.02002). 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/ssdlite_mobilenet_v2/ssdlite_mobilenet_v2.md b/models/public/ssdlite_mobilenet_v2/ssdlite_mobilenet_v2.md index 6b3e4fac0de..a15c0c823c0 100644 --- a/models/public/ssdlite_mobilenet_v2/ssdlite_mobilenet_v2.md +++ b/models/public/ssdlite_mobilenet_v2/ssdlite_mobilenet_v2.md @@ -4,8 +4,6 @@ The `ssdlite_mobilenet_v2` model is used for object detection. For details, see the [paper](https://arxiv.org/abs/1801.04381), MobileNetV2: Inverted Residuals and Linear Bottlenecks. -## Example - ## Specification | Metric | Value | diff --git a/models/public/vgg16/vgg16.md b/models/public/vgg16/vgg16.md index a9914e00bde..ffb358d424e 100644 --- a/models/public/vgg16/vgg16.md +++ b/models/public/vgg16/vgg16.md @@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of "1x3x224x224" in BG The model output for `vgg16` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. -## Example - ## Specification | Metric | Value | diff --git a/models/public/vgg19-caffe2/vgg19-caffe2.md b/models/public/vgg19-caffe2/vgg19-caffe2.md index 3500bc0d5aa..784661bbe9b 100644 --- a/models/public/vgg19-caffe2/vgg19-caffe2.md +++ b/models/public/vgg19-caffe2/vgg19-caffe2.md @@ -6,7 +6,6 @@ This is a Caffe2\* version of `vgg19` model, designed to perform image classific This model was converted from Caffe\* to Caffe2\* format. For details see repository , paper . -## Example ## Specification diff --git a/models/public/vgg19/vgg19.md b/models/public/vgg19/vgg19.md index 96f8ccc1160..99fe4f67947 100644 --- a/models/public/vgg19/vgg19.md +++ b/models/public/vgg19/vgg19.md @@ -8,8 +8,6 @@ The model input is a blob that consists of a single image of 1x3x224x224 in BGR The model output for `vgg19` is the typical object classifier output for the 1000 different classifications matching those in the ImageNet database. 
-## Example - ## Specification | Metric | Value | diff --git a/models/public/yolact-resnet50-fpn-pytorch/yolact-resnet50-fpn-pytorch.md b/models/public/yolact-resnet50-fpn-pytorch/yolact-resnet50-fpn-pytorch.md index 9d6465a77cf..4c5f2858e37 100644 --- a/models/public/yolact-resnet50-fpn-pytorch/yolact-resnet50-fpn-pytorch.md +++ b/models/public/yolact-resnet50-fpn-pytorch/yolact-resnet50-fpn-pytorch.md @@ -5,8 +5,6 @@ YOLACT ResNet 50 is a simple, fully convolutional model for real-time instance segmentation described in "YOLACT: Real-time Instance Segmentation" [paper](https://arxiv.org/abs/1904.02689). Model pretrained in Pytorch\* on COCO dataset. For details, see the [repository](https://github.com/dbolya/yolact). -## Example - ## Specification | Metric | Value |