v0.2.0 commit
SartajBhuvaji committed Feb 4, 2024
1 parent 06e2ad3 commit 315fe37
Showing 28 changed files with 10,473 additions and 2 deletions.
98 changes: 98 additions & 0 deletions .github/workflows/main.yaml
@@ -0,0 +1,98 @@
name: Build

on:
  workflow_dispatch:
  release:
    types: [published]
  push:
    branches:
      - main
  pull_request:
    branches:
      - "*"

env:
  PROJECT_NAME: foo

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [3.7, 3.8, 3.9, "3.10", "3.11"]

    steps:
      - uses: actions/checkout@v1
        with:
          fetch-depth: 9
          submodules: false

      - name: Use Python ${{ matrix.python-version }}
        uses: actions/setup-python@v1
        with:
          python-version: ${{ matrix.python-version }}

      - uses: actions/cache@v1
        id: depcache
        with:
          path: deps
          key: requirements-pip-${{ matrix.python-version }}-${{ hashFiles('requirements.txt') }}

      - name: Download dependencies
        if: steps.depcache.outputs.cache-hit != 'true'
        run: |
          pip download --dest=deps -r requirements.txt

      - name: Install dependencies
        run: |
          pip install -U --no-index --find-links=deps deps/*

      - name: Run tests
        run: |
          pytest --doctest-modules --junitxml=junit/pytest-results-${{ matrix.python-version }}.xml --cov=$PROJECT_NAME --cov-report=xml tests/

      - name: Upload pytest test results
        uses: actions/upload-artifact@master
        with:
          name: pytest-results-${{ matrix.python-version }}
          path: junit/pytest-results-${{ matrix.python-version }}.xml
        if: always()

      - name: Install distribution dependencies
        run: pip install --upgrade twine setuptools wheel
        if: matrix.python-version == 3.9

      - name: Create distribution package
        run: python setup.py sdist bdist_wheel
        if: matrix.python-version == 3.9

      - name: Upload distribution package
        uses: actions/upload-artifact@master
        with:
          name: dist-package-${{ matrix.python-version }}
          path: dist
        if: matrix.python-version == 3.9

  publish:
    runs-on: ubuntu-latest
    needs: build
    if: github.event_name == 'release'
    steps:
      - name: Download a distribution artifact
        uses: actions/download-artifact@v2
        with:
          name: dist-package-3.9
          path: dist

      - name: Publish distribution 📦 to Test PyPI
        uses: pypa/gh-action-pypi-publish@master
        with:
          skip_existing: true
          user: __token__
          password: ${{ secrets.test_pypi_password }}
          repository_url: https://test.pypi.org/legacy/

      - name: Publish distribution 📦 to PyPI
        uses: pypa/gh-action-pypi-publish@master
        with:
          user: __token__
          password: ${{ secrets.pypi_password }}
160 changes: 160 additions & 0 deletions .gitignore
@@ -0,0 +1,160 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Sartaj Bhuvaji

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
63 changes: 61 additions & 2 deletions README.md
@@ -1,2 +1,61 @@
# SDGnE
Synthetic Data Generation and Evaluation
<br />
<div align="center">
<a href="https://github.com/SartajBhuvaji/SDGnE">
<img src="img/logo.png" alt="logo" width="80" height="80">
</a>

<h3 align="center">SDGnE</h3>
  <h4 align="center">Synthetic Data Generation and Evaluation</h4><br />

<p align="center">
Seattle University: Data Science Research
<br />
<br />
Guide <br />
<a href="https://github.com/baew-seattleu">Dr. Wan Bae</a> <br /><br />
Students
<br />
<a href="https://github.com/SartajBhuvaji">Sartaj Bhuvaji</a><br>
<a href="https://github.com/Siddheshwari19">Siddheshwari Bankar</a>

SDGnE - Synthetic Data Generation and Evaluation
</p>
</div>

[![Build](https://github.com/SartajBhuvaji/SDGnE/actions/workflows/main.yaml/badge.svg)](https://github.com/SartajBhuvaji/SDGnE/actions/workflows/main.yaml)


## About
- SDGnE (Synthetic Data Generation and Evaluation) is a Python package designed to generate synthetic data and evaluate its quality using neural network models.
- This tool is intended for developers and researchers who require synthetic datasets for testing and development.
- The current version `v0.2.0` uses <i>Autoencoders</i> and <i>SMOTE</i> to generate synthetic data.

## Getting Started
`pip install sdgne`

## Notebooks
To get started, we have created notebooks for the Autoencoder and SMOTE algorithms.

### Auto Encoder
Autoencoders are a class of neural networks designed for unsupervised learning that represent features in a lower-dimensional space. They consist of an encoder and a decoder that together learn a compressed representation (encoding) of the input data. We leverage this architecture to generate synthetic data.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tfihhB8TeC47EYbNgB-uU-3p2KJLmkmC?usp=sharing)
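
For intuition, a minimal dense autoencoder could look like the sketch below. This is not SDGnE's implementation; the layer sizes, `n_features`, and training data are assumptions chosen only to illustrate the encoder/decoder structure and how decoded latent vectors can serve as synthetic rows.

```python
# Illustrative sketch of a dense autoencoder for tabular data.
# Not SDGnE's implementation: layer sizes, `n_features`, and the data are hypothetical.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 10  # hypothetical number of input columns
latent_dim = 3   # size of the compressed representation

encoder = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(latent_dim, activation="relu"),
])
decoder = keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(n_features, activation="linear"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(1000, n_features)  # stand-in for a real dataset
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)

# One common way to obtain synthetic rows: encode the data, perturb the
# latent vectors slightly, and decode them back to feature space.
z = encoder.predict(X, verbose=0)
synthetic = decoder.predict(z + np.random.normal(0, 0.05, z.shape), verbose=0)
```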

### SMOTE
SMOTE (Synthetic Minority Oversampling Technique) is used to generate synthetic data from the original dataset. Over the years, several variants of SMOTE have been developed, each tailored to specific scenarios and requirements. These variants use distinct methodologies to improve synthetic data generation and, in turn, model performance by producing a more balanced class distribution. We provide a few SMOTE variants for synthetic data generation.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/18kAMJR0VBtfC3swIkbuTPWd0Sb0mDVCo?usp=sharing)
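
As a rough illustration of the underlying technique (classic SMOTE via the third-party `imbalanced-learn` library, not SDGnE's own variants), oversampling a minority class might look like the sketch below; the dataset is a hypothetical stand-in.

```python
# Illustrative use of classic SMOTE via imbalanced-learn (not SDGnE's variants).
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (900, 4)),   # majority class
               rng.normal(3, 1, (100, 4))])  # minority class
y = np.array([0] * 900 + [1] * 100)

# Each synthetic minority row is an interpolation between a minority point
# and one of its k nearest minority-class neighbours.
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(Counter(y))      # Counter({0: 900, 1: 100})
print(Counter(y_res))  # Counter({0: 900, 1: 900})
```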

### Comparison
In this notebook, we compare the `Single Encoder Autoencoder` and the `SMOTE` algorithm for synthetic data generation. We generate synthetic data with both algorithms and perform a statistical evaluation.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1VAM6hGRPvoLfS8x_WYDbexAc7V4I00_h?usp=sharing)
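
One simple, package-independent way to perform such a statistical evaluation is a column-wise two-sample Kolmogorov-Smirnov test between real and synthetic data. The sketch below assumes two NumPy arrays of the same shape and is illustrative only; it is not SDGnE's built-in metric.

```python
# Illustrative column-wise evaluation of synthetic data quality
# (not SDGnE's built-in metrics); arrays below are hypothetical stand-ins.
import numpy as np
from scipy.stats import ks_2samp

def column_ks(real, synthetic):
    """Two-sample KS statistic and p-value for each feature column."""
    return [ks_2samp(real[:, j], synthetic[:, j]) for j in range(real.shape[1])]

real = np.random.normal(0, 1, (500, 3))          # stand-in for original data
synthetic = np.random.normal(0.05, 1, (500, 3))  # stand-in for generated data
for j, res in enumerate(column_ks(real, synthetic)):
    print(f"feature {j}: KS={res.statistic:.3f}, p={res.pvalue:.3f}")
```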

## Features

- **Data Generation**: Create synthetic datasets that mimic the statistical properties of real-world data.
- **Neural Autoencoders**: Utilize various autoencoder architectures to learn data representations.
- **Evaluation Metrics**: Assess the quality of synthetic data using built-in evaluation metrics.
- **Extensibility**: Easily extend the package with custom data generators and evaluators.


1 change: 1 addition & 0 deletions docs/README.md
@@ -0,0 +1 @@
# GITHUB PAGE
Binary file added img/logo.png
