Merge pull request #11 from DefTruth/dev
 feat(pypi): update torchlm to pypi v0.1.2 (#10)
DefTruth authored Feb 14, 2022
2 parents 7d4cf49 + 340052a commit fbb2447
Showing 31 changed files with 334 additions and 120 deletions.
4 changes: 3 additions & 1 deletion .gitignore
@@ -3,4 +3,6 @@ __pycache__
.DS_Store
debug.py
build
dist
torchlm.egg-info
*.sh
1 change: 1 addition & 0 deletions LICENSE
@@ -19,3 +19,4 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

98 changes: 35 additions & 63 deletions README.md
@@ -2,7 +2,7 @@

<div align='center'>
<img src=https://img.shields.io/badge/PRs-welcome-9cf.svg >
<img src=https://img.shields.io/badge/slack-chat-ffa.svg?logo=slack >
<img src=https://static.pepy.tech/personalized-badge/torchlm?period=total&units=international_system&left_color=grey&right_color=pink&left_text=pypi%20downloads >
<img src=https://img.shields.io/pypi/v/torchlm?color=aff >
<img src=https://img.shields.io/pypi/pyversions/torchlm?color=dfd >
<img src=https://img.shields.io/badge/macos|linux|windows-pass-skyblue.svg >
@@ -11,11 +11,11 @@


## 🤗 Introduction
**torchlm** is a PyTorch landmarks-only library with **100+ data augmentations**, **training** and **inference**. **torchlm** focuses only on landmark detection of any kind, such as face landmarks, hand keypoints and body keypoints. It provides **30+** native data augmentations and can **bind** with **80+** transforms from torchvision and albumentations; no matter whether the input is a np.ndarray or a torch Tensor, **torchlm** automatically adapts to the data type and wraps the result back to the original type through an **autodtype** wrapper. Further, **torchlm** will add modules for **training** and **inference** in the future.

## 🆕 What's New

* [2022/02/13]: Add **30+** native data augmentations and **bind** **80+** transforms from torchvision and albumentations.

## 🛠️ Usage

@@ -36,15 +36,16 @@ pip3 install torchlm -i https://pypi.org/simple/
or install from source.
```shell
# clone torchlm repository locally
git clone --depth=1 https://github.com/DefTruth/torchlm.git
cd torchlm
# install in editable mode
pip install -e .
```

### Data Augmentation
**torchlm** provides **30+** native data augmentations for landmarks and can **bind** with **80+** transforms from torchvision and albumentations through the **torchlm.bind** method. Further, **torchlm.bind** provides a `prob` param at bind-level to force any transform or callable to behave as a random-style augmentation. The data augmentations in **torchlm** are `safe` and `simplest`: any transform operation that would push landmarks outside the image at runtime is automatically dropped, so the number of landmarks stays unchanged. The layout format of landmarks is `xy`, with shape `(N, 2)`, where `N` denotes the number of input landmarks. No matter whether the input is a np.ndarray or a torch Tensor, **torchlm** automatically adapts to the data type and wraps the result back to the original type through an **autodtype** wrapper.

* use native torchlm transforms
```python
import torchlm
transform = torchlm.LandmarksCompose([
@@ -57,13 +58,19 @@ transform = torchlm.LandmarksCompose([
    torchlm.LandmarksRandomBrightness(prob=0.),
    torchlm.LandmarksRandomRotate(40, prob=0.5, bins=8),
    torchlm.LandmarksRandomCenterCrop((0.5, 1.0), (0.5, 1.0), prob=0.5),
    torchlm.LandmarksResize((256, 256)),
    torchlm.LandmarksNormalize(),
    torchlm.LandmarksToTensor(),
    torchlm.LandmarksToNumpy(),
    torchlm.LandmarksUnNormalize()
    # ...
])
```
<div align='center'>
<img src='docs/res/10.jpg' height="100px" width="100px">
<img src='docs/res/40.jpg' height="100px" width="100px">
<img src='docs/res/92.jpg' height="100px" width="100px">
<img src='docs/res/234.jpg' height="100px" width="100px">
<img src='docs/res/243.jpg' height="100px" width="100px">
<img src='docs/res/255.jpg' height="100px" width="100px">
<img src='docs/res/388.jpg' height="100px" width="100px">
</div>

* **bind** torchvision and albumentations transforms, using **torchlm.bind**
```python
import torchvision
@@ -72,72 +79,41 @@ import torchlm
transform = torchlm.LandmarksCompose([
    # use native torchlm transforms
    torchlm.LandmarksRandomScale(prob=0.5),
    # ...
    # bind torchvision image only transforms
    torchlm.bind(torchvision.transforms.GaussianBlur(kernel_size=(5, 25)), prob=0.5),  # bind with a given prob
    torchlm.bind(torchvision.transforms.RandomAutocontrast(p=0.5)),
    torchlm.bind(torchvision.transforms.RandomAdjustSharpness(sharpness_factor=3, p=0.5)),
    # bind albumentations image only transforms
    torchlm.bind(albumentations.ColorJitter(p=0.5)),
    torchlm.bind(albumentations.GlassBlur(p=0.5)),
    torchlm.bind(albumentations.RandomShadow(p=0.5)),
    # bind albumentations dual transforms
    torchlm.bind(albumentations.RandomCrop(height=200, width=200, p=0.5)),
    torchlm.bind(albumentations.RandomScale(p=0.5)),
    torchlm.bind(albumentations.Rotate(p=0.5)),
    torchlm.LandmarksResize((256, 256)),
    torchlm.LandmarksNormalize(),
    torchlm.LandmarksToTensor(),
    torchlm.LandmarksToNumpy(),
    torchlm.LandmarksUnNormalize()
    # ...
])
```
* **bind** custom callable array or Tensor functions, using **torchlm.bind**

```python
# First, define your custom functions
from typing import Tuple

import numpy as np
from torch import Tensor

def callable_array_noop(img: np.ndarray, landmarks: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
    # do some transform here ...
    return img.astype(np.uint32), landmarks.astype(np.float32)

def callable_tensor_noop(img: Tensor, landmarks: Tensor) -> Tuple[Tensor, Tensor]:
    # do some transform here ...
    return img, landmarks
```

```python
# Then, bind your functions and put them into the transforms pipeline.
transform = torchlm.LandmarksCompose([
    # use native torchlm transforms
    torchlm.LandmarksRandomScale(prob=0.5),
    # ...
    # bind torchvision image only transforms
    torchlm.bind(torchvision.transforms.GaussianBlur(kernel_size=(5, 25))),
    torchlm.bind(torchvision.transforms.RandomAutocontrast(p=0.5)),
    torchlm.bind(torchvision.transforms.RandomAdjustSharpness(sharpness_factor=3, p=0.5)),
    # bind albumentations image only transforms
    torchlm.bind(albumentations.ColorJitter(p=0.5)),
    torchlm.bind(albumentations.GlassBlur(p=0.5)),
    torchlm.bind(albumentations.RandomShadow(p=0.5)),
    # bind albumentations dual transforms
    torchlm.bind(albumentations.RandomCrop(height=200, width=200, p=0.5)),
    torchlm.bind(albumentations.RandomScale(p=0.5)),
    torchlm.bind(albumentations.Rotate(p=0.5)),
    # bind custom callable array functions
    torchlm.bind(callable_array_noop, bind_type=torchlm.BindEnum.Callable_Array),
    # bind custom callable Tensor functions with a given prob
    torchlm.bind(callable_tensor_noop, bind_type=torchlm.BindEnum.Callable_Tensor, prob=0.5),
    torchlm.LandmarksResize((256, 256)),
    torchlm.LandmarksNormalize(),
    torchlm.LandmarksToTensor(),
    torchlm.LandmarksToNumpy(),
    torchlm.LandmarksUnNormalize()
    # ...
])
```
<div align='center'>
@@ -167,12 +143,8 @@ BindTorchVisionTransform(GaussianBlur())() AutoDtype Info: AutoDtypeEnum.Tensor_
BindTorchVisionTransform(GaussianBlur())() Execution Flag: True
BindAlbumentationsTransform(ColorJitter())() AutoDtype Info: AutoDtypeEnum.Array_InOut
BindAlbumentationsTransform(ColorJitter())() Execution Flag: True
BindArrayCallable(callable_array_noop())() AutoDtype Info: AutoDtypeEnum.Array_InOut
BindArrayCallable(callable_array_noop())() Execution Flag: True
BindTensorCallable(callable_tensor_noop())() AutoDtype Info: AutoDtypeEnum.Tensor_InOut
BindTensorCallable(callable_tensor_noop())() Execution Flag: False
LandmarksUnNormalize() AutoDtype Info: AutoDtypeEnum.Array_InOut
LandmarksUnNormalize() Execution Flag: True
```
* Execution Flag: True means the current transform was executed successfully; False means it was not executed because of the random probability or some runtime exception (torchlm will show the error info if debug mode is True).
* AutoDtype Info:
@@ -181,7 +153,7 @@
  * Array_In means the current transform needs a np.ndarray input and then outputs a torch Tensor.
  * Tensor_In means the current transform needs a torch Tensor input and then outputs a np.ndarray.

But it is OK if you pass a Tensor to a np.ndarray-like transform; **torchlm** will automatically be compatible with the different data type and then wrap the result back to the original type through an **autodtype** wrapper.
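
For a concrete feel of that behavior, here is a hypothetical sketch (not from the repo). It assumes `LandmarksResize` is an Array_InOut transform, accepts an HWC uint8 image, and can be called directly as `resize(img, landmarks)`; a Tensor input should come back as a Tensor:

```python
import torch
import torchlm

resize = torchlm.LandmarksResize((256, 256))  # assumed to work on np.ndarray internally (Array_InOut)

img_t = torch.randint(0, 255, (224, 224, 3), dtype=torch.uint8)  # dummy HWC image tensor
lms_t = torch.rand(68, 2) * 224  # dummy xy landmarks, shape (N, 2)

out_img, out_lms = resize(img_t, lms_t)
# autodtype converts the Tensors to arrays for the transform, then wraps the results back
print(type(out_img), type(out_lms))  # expected: <class 'torch.Tensor'> twice
```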


* Supported transform sets: see [transforms.md](https://github.com/DefTruth/torchlm/blob/main/docs/api/transfroms.md). A detailed example can be found at [test/transforms.py](https://github.com/DefTruth/torchlm/blob/main/test/transforms.py).
@@ -204,14 +176,14 @@
* [ ] ...

## 📖 Documentation
* [ ] Data Augmentation API (TODO)
* [ ] ...

## 🎓 License
The code of **torchlm** is released under the MIT License.

## ❤️ Contribution
Please consider ⭐ this repo if you like it, as it is the simplest way to support me.

## 👋 Acknowledgement
The implementation of torchlm's transforms borrows code from [Paperspace](https://github.com/Paperspace/DataAugmentationForObjectDetection/blob/master/data_aug/bbox_util.py).
1 change: 0 additions & 1 deletion docs/.gitignore
@@ -1,4 +1,3 @@
.idea
__pycache__
.DS_Store
logs