
Commit

Merge branch 'release/0.5.1'
AP6YC committed May 13, 2022
2 parents 0b31d1f + 821fb62 commit 679505e
Showing 24 changed files with 237 additions and 247 deletions.
135 changes: 9 additions & 126 deletions .github/workflows/CI.yml
@@ -1,8 +1,8 @@
name: CI
# Run on master, tags, or any pull request
on:
# schedule:
# - cron: '0 2 * * *' # Daily at 2 AM UTC (8 PM CST)
# schedule:
# - cron: '0 2 * * *' # Daily at 2 AM UTC (8 PM CST)
push:
branches: [master]
tags: ["*"]
@@ -15,20 +15,19 @@ jobs:
fail-fast: false
matrix:
version:
# - "1.0" # LTS
- "1.5"
- "1.6" # LTS
- "1" # Latest Release
os:
- ubuntu-latest
# - macOS-latest
- macOS-latest
- windows-latest
arch:
- x64
# - x86
# exclude:
# Test 32-bit only on Linux
# - os: macOS-latest
# arch: x86
- x86
exclude:
# Exclude 32-bit macOS due to Julia support
- os: macOS-latest
arch: x86
# - os: windows-latest
# arch: x86
# include:
@@ -79,119 +78,3 @@ jobs:
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
path-to-lcov: lcov.info

# slack:
# name: Notify Slack Failure
# needs: test
# runs-on: ubuntu-latest
# if: always() && github.event_name == 'schedule'
# steps:
# - uses: technote-space/workflow-conclusion-action@v2
# - uses: voxmedia/github-action-slack-notify-build@v1
# if: env.WORKFLOW_CONCLUSION == 'failure'
# with:
# channel: nightly-dev
# status: FAILED
# color: danger
# env:
# SLACK_BOT_TOKEN: ${{ secrets.DEV_SLACK_BOT_TOKEN }}

# docs:
# name: Documentation
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v2
# - uses: julia-actions/setup-julia@v1
# with:
# version: '1'
# - run: |
# git config --global user.name name
# git config --global user.email email
# git config --global github.user username
# - run: |
# julia --project=docs -e '
# using Pkg
# Pkg.develop(PackageSpec(path=pwd()))
# Pkg.instantiate()'
# - run: |
# julia --project=docs -e '
# using Documenter: doctest
# using PkgTemplates
# doctest(PkgTemplates)'
# - run: julia --project=docs docs/make.jl
# env:
# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }}

# # This is a basic workflow to help you get started with Actions

# name: Unit Test

# # Controls when the action will run.
# on:
# # Triggers the workflow on push or pull request events but only for the master branch
# push:
# branches: [ master ]
# pull_request:
# branches: [ master ]

# # Allows you to run this workflow manually from the Actions tab
# workflow_dispatch:

# # A workflow run is made up of one or more jobs that can run sequentially or in parallel
# jobs:
# # This workflow contains a single job called "build"
# build:
# # The type of runner that the job will run on
# # runs-on: ubuntu-latest
# strategy:
# matrix:
# julia: [1.5, latest]
# # Steps represent a sequence of tasks that will be executed as part of the job
# steps:
# # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
# - uses: actions/checkout@v2

# # Runs a single command using the runners shell
# - name: Run a one-line script
# run: echo Hello, world!

# # Runs a set of commands using the runners shell
# - name: Run a multi-line script
# run: |
# echo Add other actions to build,
# echo test, and deploy your project.


# name: Python package

# on: [push]

# jobs:
# build:

# runs-on: ubuntu-latest
# strategy:
# matrix:
# python-version: [2.7, 3.5, 3.6, 3.7, 3.8]

# steps:
# - uses: actions/checkout@v2
# - name: Set up Python ${{ matrix.python-version }}
# uses: actions/setup-python@v2
# with:
# python-version: ${{ matrix.python-version }}
# - name: Install dependencies
# run: |
# python -m pip install --upgrade pip
# pip install flake8 pytest
# if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
# - name: Lint with flake8
# run: |
# # stop the build if there are Python syntax errors or undefined names
# flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
# flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
# - name: Test with pytest
# run: |
# pytest
8 changes: 3 additions & 5 deletions .github/workflows/Documentation.yml
@@ -15,18 +15,16 @@ jobs:
# Cancel ongoing documentation build if pushing to branch again before the previous
# build is finished.
- name: Cancel ongoing documentation builds for previous commits
uses: styfle/cancel-workflow-action@0.6.0
uses: styfle/cancel-workflow-action@0.9.1
with:
access_token: ${{ github.token }}

- uses: actions/checkout@v2
- uses: julia-actions/setup-julia@latest
with:
version: '1.4'
version: '1.6'
- name: Install dependencies
run: |
pip install scipy
julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
- name: Build and deploy
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token
2 changes: 1 addition & 1 deletion Project.toml
@@ -2,7 +2,7 @@ name = "AdaptiveResonance"
uuid = "3d72adc0-63d3-4141-bf9b-84450dd0395b"
authors = ["Sasha Petrenko"]
description = "A Julia package for Adaptive Resonance Theory (ART) algorithms."
version = "0.5.0"
version = "0.5.1"

[deps]
Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
22 changes: 16 additions & 6 deletions README.md
@@ -1,13 +1,23 @@
# AdaptiveResonance

A Julia package for Adaptive Resonance Theory (ART) algorithms.
d

| **Documentation** | **Build Status** | **Coverage** |
|:------------------:|:----------------:|:------------:|
| [![Stable][docs-stable-img]][docs-stable-url] | [![Build Status][ci-img]][ci-url] | [![Codecov][codecov-img]][codecov-url] |
| [![Dev][docs-dev-img]][docs-dev-url] | [![Build Status][appveyor-img]][appveyor-url] | [![Coveralls][coveralls-img]][coveralls-url] |
| **Dependents** | **Date** | **Status** |
| [![deps][deps-img]][deps-url] | [![version][version-img]][version-url] | [![pkgeval][pkgeval-img]][pkgeval-url] |
| **Documentation** | **Testing Status** | **Coverage** | **Reference** |
|:------------------:|:----------------:|:------------:|:-------------:|
| [![Stable][docs-stable-img]][docs-stable-url] | [![Build Status][ci-img]][ci-url] | [![Codecov][codecov-img]][codecov-url] | [![DOI][joss-img]][joss-url] |
| [![Dev][docs-dev-img]][docs-dev-url] | [![Build Status][appveyor-img]][appveyor-url] | [![Coveralls][coveralls-img]][coveralls-url] | [![DOI][zenodo-img]][zenodo-url] |
| **Documentation Build** | **JuliaHub Status** | **Dependents** | **Release** |
| [![Documentation][doc-status-img]][doc-status-url] | [![pkgeval][pkgeval-img]][pkgeval-url] | [![deps][deps-img]][deps-url] | [![version][version-img]][version-url] |

[joss-img]: https://joss.theoj.org/papers/10.21105/joss.03671/status.svg
[joss-url]: https://doi.org/10.21105/joss.03671

[doc-status-img]: https://github.com/AP6YC/AdaptiveResonance.jl/actions/workflows/Documentation.yml/badge.svg
[doc-status-url]: https://github.com/AP6YC/AdaptiveResonance.jl/actions/workflows/Documentation.yml

[zenodo-img]: https://zenodo.org/badge/DOI/10.5281/zenodo.5748453.svg
[zenodo-url]: https://doi.org/10.5281/zenodo.5748453

[deps-img]: https://juliahub.com/docs/AdaptiveResonance/deps.svg
[deps-url]: https://juliahub.com/ui/Packages/AdaptiveResonance/Sm0We?t=2
3 changes: 3 additions & 0 deletions docs/Project.toml
@@ -8,3 +8,6 @@ MLDataUtils = "cc2ba9b6-d476-5e6d-8eaf-a92d5412d41d"
MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
MultivariateStats = "6f286f6a-111f-5878-ab1e-185364afe411"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"

[compat]
MLDatasets = "0.6"
12 changes: 12 additions & 0 deletions docs/combo.jl
@@ -0,0 +1,12 @@
"""
combo.jl
This is a convenience script for docs development that builds and then live-serves the docs locally.
"""


# Make the documentation
include("make.jl")

# Host the documentation locally
include("serve.jl")
Empty file.
19 changes: 11 additions & 8 deletions docs/examples/adaptive_resonance/data_config.jl
@@ -17,7 +17,7 @@
# Preprocessing of the features occurs as follows:
# 1. The features are linearly normalized from 0 to 1 with respect to each feature with `linear_normalization`.
# This is done according to some known bounds that each feature has.
# 2. The features are then complement coded, meaning that the feature vector is appended to its 1-complement (i.e., x -> [x, 1-x]) with `complement_code`.
# 2. The features are then complement coded, meaning that the feature vector is appended to its 1-complement (i.e., $x \rightarrow \left[x, 1-x\right]$) with `complement_code`.

# This preprocessing has the ultimate consequence that the input features must be bounded.
# This may not be a problem in some offline applications with a fixed dataset, but in others where the bounds are not known, techniques such as sigmoidal limiting are often used to place an artificial limit.
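The two preprocessing steps above can be sketched in a few lines of plain Julia. This is an illustration of the idea only, not the package's actual `linear_normalization` and `complement_code` implementations:

```julia
# Illustrative sketch of the preprocessing described above; the package's
# own functions handle this internally.

# Step 1: linearly normalize each feature to [0, 1] given known bounds
linear_norm(x, lo, hi) = (x .- lo) ./ (hi .- lo)

# Step 2: complement code by appending the 1-complement of the vector
complement(x) = vcat(x, 1 .- x)

x = linear_norm([2.0, 8.0], 0.0, 10.0)  # -> [0.2, 0.8]
complement(x)                           # -> [0.2, 0.8, 0.8, 0.2]
```

One consequence of complement coding is that every coded vector has the same city-block norm (equal to the original dimensionality), a property that fuzzy ART modules rely upon.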
@@ -45,15 +45,18 @@ fieldnames(AdaptiveResonance.DataConfig)
# In batch training mode, the minimums and maximums are detected automatically; the minimum and maximum values for every feature are saved and used for the preprocessing step at every subsequent iteration.
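The automatic detection described above amounts to taking per-feature extrema over the batch. A hand-rolled sketch of that step (not the package's internal code):

```julia
# Hand-rolled illustration of batch min/max detection; features are rows
# and samples are columns, matching the examples' data layout.
features = [0.1 0.5 0.9;
            2.0 4.0 3.0]

mins = vec(minimum(features, dims=2))  # per-feature minima -> [0.1, 2.0]
maxs = vec(maximum(features, dims=2))  # per-feature maxima -> [0.9, 4.0]
```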

## Load data
using MLDatasets
using MLDatasets # Iris dataset
using MLDataUtils # Shuffling and splitting

## We will download the Iris dataset for its small size and benchmark use for clustering algorithms.
Iris.download(i_accept_the_terms_of_use=true)
features, labels = Iris.features(), Iris.labels()

## We will then train the FuzzyART module in unsupervised mode and see that the data config is now set
y_hat_train = train!(art, features)
art.config
## Get the iris dataset as a DataFrame
iris = Iris()
## Manipulate the features and labels into a matrix of features and a vector of labels
features, labels = Matrix(iris.features)', vec(Matrix{String}(iris.targets))

# Because the MLDatasets package gives us Iris labels as strings, we will use the `MLDataUtils.convertlabel` method with the `MLLabelUtils.LabelEnc.Indices` type to get a list of integers representing each class:
labels = convertlabel(LabelEnc.Indices{Int}, labels)
unique(labels)
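For reference, the same string-to-integer conversion can be done by hand in a few lines; this is illustrative only, and `convertlabel` above remains the idiomatic route:

```julia
# Hand-rolled equivalent of the label conversion: map each unique string
# label to an integer index in order of first appearance.
labels = ["setosa", "versicolor", "setosa", "virginica"]
codes = Dict(l => i for (i, l) in enumerate(unique(labels)))
int_labels = [codes[l] for l in labels]  # -> [1, 2, 1, 3]
```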

# !!! note
# This automatic detection of feature characteristics only occurs if the `config` is not already setup.
52 changes: 49 additions & 3 deletions docs/examples/adaptive_resonance/incremental-batch.jl
@@ -1,7 +1,7 @@
# ---
# title: Incremental vs. Batch Example
# id: incremental_batch
# cover: ../assets/art.png
# cover: assets/incremental-batch-cover.png
# date: 2021-12-1
# author: "[Sasha Petrenko](https://github.com/AP6YC)"
# julia: 1.6
@@ -27,8 +27,10 @@ using MLDataUtils # Shuffling and splitting
using Printf # Formatted number printing

# We will download the Iris dataset for its small size and benchmark use for clustering algorithms.
Iris.download(i_accept_the_terms_of_use=true)
features, labels = Iris.features(), Iris.labels()
## Get the iris dataset as a DataFrame
iris = Iris()
## Manipulate the features and labels into a matrix of features and a vector of labels
features, labels = Matrix(iris.features)', vec(Matrix{String}(iris.targets))

# Because the MLDatasets package gives us Iris labels as strings, we will use the `MLDataUtils.convertlabel` method with the `MLLabelUtils.LabelEnc.Indices` type to get a list of integers representing each class:
labels = convertlabel(LabelEnc.Indices{Int}, labels)
@@ -109,3 +111,47 @@ perf_test_incremental = performance(y_hat_incremental, y_test)
@printf "Incremental training performance: %.4f\n" perf_train_incremental
@printf "Batch testing performance: %.4f\n" perf_test_batch
@printf "Incremental testing performance: %.4f\n" perf_test_incremental

# ## Visualization

# So we showed that the performance and behavior of modules are identical in incremental and batch modes.
# Great!
# Sadly, this equivalence doesn't lend itself to visualization in any meaningful way.
# Nonetheless, we would like a pretty picture at the end of the experiment to verify that these identical solutions work in the first place.
# Sanity checks are meaningful in their own right, right?

# To do this, we will reduce the dimensionality of the dataset to two dimensions and show in a scatter plot how the modules classify the test data into groups.
# This will be done with principal component analysis (PCA) to cast the points into a 2-D space while trying to preserve the relative distances between points in the higher dimension.
# The process isn't perfect by any means, but it suffices for visualization.

## Import visualization utilities
using Printf # Formatted number printing
using MultivariateStats # Principal component analysis (PCA)
using Plots # Plotting frontend

## Train a PCA model
M = fit(PCA, features; maxoutdim=2)

## Apply the PCA model to the testing set
X_test_pca = transform(M, X_test)

# Now that we have the test points cast into a 2-D set of points, we can create a scatter plot that shows how each point is categorized by the modules.

## Create a scatterplot object from the data with some additional formatting options
scatter(
X_test_pca[1, :], # PCA dimension 1
X_test_pca[2, :], # PCA dimension 2
group = y_hat_batch, # labels belonging to each point
markersize = 8, # size of scatter points
legend = false, # no legend
xtickfontsize = 12, # x-tick size
ytickfontsize = 12, # y-tick size
dpi = 300, # Set the dots-per-inch
xlims = :round, # Round up the x-limits to the nearest whole number
xlabel = "\$PCA_1\$", # x-label
ylabel = "\$PCA_2\$", # y-label
title = (@sprintf "DDVFA Iris Clusters"), # formatted title
)

# This plot shows that the DDVFA modules do well at identifying the structure of the three clusters despite not achieving 100% test performance.
png("assets/incremental-batch-cover") #hide

2 comments on commit 679505e

@AP6YC AP6YC commented on 679505e May 13, 2022


@JuliaRegistrator register

Release notes:

This patch implements some cosmetic updates, such as:

  1. Fixing warnings about incremental compilation being broken during module import.
  2. Updating the data loading steps in the DemoCards examples in the documentation.
  3. Expanding upon the visualizations in the examples.

@JuliaRegistrator

Registration pull request created: JuliaRegistries/General/60150

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the GitHub interface, or via:

git tag -a v0.5.1 -m "<description of version>" 679505ef176d92f3de6e4d76b6a572645a0c5edd
git push origin v0.5.1
