
Commit

Merge branch 'add_Aug_DiskCachedDataset' of https://github.com/MinaKh/tonic into add_Aug_DiskCachedDataset

Merging the last minor modifications of the branch with the latest tonic master.
MinaKh committed May 23, 2024
2 parents 0ef6582 + fdb6dc9 commit 43fcbe1
Showing 18 changed files with 346 additions and 83 deletions.
6 changes: 2 additions & 4 deletions .github/workflows/ci-pipeline.yml
@@ -9,7 +9,7 @@ jobs:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-2022]
python-version: ["3.8", "3.9", "3.10", "3.11"]
python-version: ["3.8", "3.10", "3.11"]
steps:
- uses: actions/checkout@v3
- if: matrix.os == 'ubuntu-latest'
@@ -21,7 +21,6 @@ jobs:
python-version: ${{ matrix.python-version }}
- name: Install requirements
run: |
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
pip install -r test/requirements.txt
pip install -r test/torch_requirements.txt
pip install .
@@ -43,7 +42,6 @@ jobs:
python-version: 3.9
- name: Generate coverage report
run: |
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
pip install -r test/requirements.txt
pip install -r test/torch_requirements.txt
pip install .
@@ -65,8 +63,8 @@ jobs:
python-version: 3.9
- name: Install dependencies
run: |
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
pip install -r docs/requirements.txt
pip install -r test/torch_requirements.txt
pip install .
- name: Build documentation
run: cd docs && make clean && make html # Use SPHINXOPTS="-W" to fail on warning.
2 changes: 2 additions & 0 deletions .gitignore
@@ -5,6 +5,8 @@ data
generated
auto_examples
.vscode
AUTHORS
ChangeLog

# Byte-compiled / optimized / DLL files
__pycache__/
2 changes: 1 addition & 1 deletion README.md
@@ -5,7 +5,7 @@
[![contributors](https://img.shields.io/github/contributors-anon/neuromorphs/tonic)](https://github.com/neuromorphs/tonic/pulse)
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/neuromorphs/tonic/main?labpath=docs%2Ftutorials)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5079802.svg)](https://doi.org/10.5281/zenodo.5079802)
[![Discord](https://img.shields.io/discord/852094154188259338)](https://discord.gg/V6FHBZURkg)
[![Discord](https://img.shields.io/discord/1044548629622439977)](https://discord.gg/qubbM4uPuA)

**Tonic** is a tool to facilitate the download, manipulation and loading of event-based/spike-based data. It's like PyTorch Vision but for neuromorphic data!

10 changes: 9 additions & 1 deletion docs/datasets.rst
@@ -53,6 +53,14 @@ Star tracking

EBSSA

Eye tracking
-------------------
.. autosummary::
:toctree: generated/
:template: class_dataset.rst

ThreeET_Eyetracking

.. currentmodule:: tonic.prototype.datasets

Prototype iterable datasets
@@ -65,4 +73,4 @@ Prototype iterable datasets
NCARS
STMNIST
Gen1AutomotiveDetection
Gen4AutomotiveDetectionMini
Gen4AutomotiveDetectionMini
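
With this entry added, the new dataset is importable from tonic.datasets like the other classes listed here. A minimal loading sketch follows; save_to and the indexing behaviour are assumptions based on the common tonic dataset signature, and only split="train" is confirmed by the new test further down this page:

import tonic

# Hypothetical usage: save_to is assumed, only split="train" is confirmed by the tests.
dataset = tonic.datasets.ThreeET_Eyetracking(save_to="./data", split="train")
events, target = dataset[0]  # structured event array plus a numpy label array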
7 changes: 3 additions & 4 deletions docs/getting_involved/communication_channels.rst
@@ -3,11 +3,10 @@ Communication channels

Discord
-------
We would be very happy if you got in touch with us, so please don't hesitate!
The easiest way is to join our Discord channel. There we can reply more or less
instantly. The #tonic channel is part of SynSense's public space along other channels
The easiest way to get in touch with us is via Discord. There we can reply more or less
instantly. The #tonic channel is part of Open Neuromorphic's public space along other channels
for all things revolving around SNN training.
The link to join is https://discord.gg/V6FHBZURkg.
The link to join is https://discord.gg/qubbM4uPuA

Github
------
3 changes: 2 additions & 1 deletion docs/index.md
@@ -4,6 +4,7 @@
[![Documentation Status](https://readthedocs.org/projects/tonic/badge/?version=latest)](https://tonic.readthedocs.io/en/latest/?badge=latest)
[![contributors](https://img.shields.io/github/contributors-anon/neuromorphs/tonic)](https://github.com/neuromorphs/tonic/pulse)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5079802.svg)](https://doi.org/10.5281/zenodo.5079802)
[![Discord](https://img.shields.io/discord/1044548629622439977)](https://discord.gg/qubbM4uPuA)

**Download and manipulate neuromorphic datasets fast and easily!**

@@ -53,4 +54,4 @@ how-tos/how-tos
reading_material/reading_material
getting_involved/getting_involved
about/about
```
```
2 changes: 1 addition & 1 deletion docs/reading_material/intro-snns.rst
@@ -22,7 +22,7 @@ ANN is typically a
tensor with high data precision, but low temporal resolution, the input
for an SNN are
binary flags of spikes with comparatively high temporal precision in the
order of s. The unit in the SNN integrates all of the incoming spikes,
order of µs. The unit in the SNN integrates all of the incoming spikes,
which affect the internal parameters such as membrane potential. The
unit in the ANN
merely computes the linear combination for inputs on all synapses and
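
As a side note to the paragraph this hunk touches, a minimal leaky integrate-and-fire step in plain numpy (illustrative only, not tonic or any particular simulator API) shows how binary input spikes are integrated into a membrane potential:

import numpy as np

def lif_step(v, spikes, weights, leak=0.9, threshold=1.0):
    # Decay the membrane potential, then integrate the weighted input spikes.
    v = leak * v + weights @ spikes
    fired = (v >= threshold).astype(float)  # emit a spike where the threshold is crossed
    v = np.where(fired > 0, 0.0, v)         # reset neurons that fired
    return v, fired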
2 changes: 0 additions & 2 deletions docs/requirements.txt
@@ -4,8 +4,6 @@ sphinx-book-theme
sphinx-gallery
myst_nb
pbr
torchvision
ipywidgets
matplotlib
torchdata
sphinx-autoapi
33 changes: 32 additions & 1 deletion test/test_datasets.py
@@ -98,13 +98,44 @@ class EBSSATestCase(dataset_utils.DatasetTestCase):
def inject_fake_data(self, tmpdir):
testfolder = os.path.join(tmpdir, "EBSSA")
os.makedirs(testfolder, exist_ok=True)
filename = "A5ooN9edo7TnNPx/download/labelled_ebssa.h5"
filename = "Jpw3Adae5kReMrN/download/labelled_ebssa.h5"
download_url(
url=base_url + filename, root=testfolder, filename="labelled_ebssa.h5"
)
return {"n_samples": 1}


class ThreeET_EyetrackingTestCase(dataset_utils.DatasetTestCase):
DATASET_CLASS = datasets.ThreeET_Eyetracking
FEATURE_TYPES = (datasets.ThreeET_Eyetracking.dtype,)
TARGET_TYPES = (np.ndarray,)
KWARGS = {"split": "train"}

def inject_fake_data(self, tmpdir):
testfolder = os.path.join(tmpdir, "ThreeET_Eyetracking")
os.makedirs(testfolder, exist_ok=True)
os.makedirs(os.path.join(testfolder, "data"), exist_ok=True)
os.makedirs(os.path.join(testfolder, "labels"), exist_ok=True)
# write one line of file name into train_files.txt under testfolder
os.system("echo testcase > " + os.path.join(testfolder, "train_files.txt"))
filename = "testcase"

# download test h5 file
download_url(
url=base_url + "4aiA4BAqz5km4Gc/download/" + filename + ".h5",
root=os.path.join(testfolder, "data"),
filename=filename + ".h5",
)
# # download test labels
download_url(
url=base_url + "G6ejNmXNnB2sKyc/download/" + filename + ".txt",
root=os.path.join(testfolder, "labels"),
filename=filename + ".txt",
)

return {"n_samples": 1}


class NCaltech101TestCase(dataset_utils.DatasetTestCase):
DATASET_CLASS = datasets.NCALTECH101
FEATURE_TYPES = (datasets.NCALTECH101.dtype,)
144 changes: 111 additions & 33 deletions test/test_transforms.py
@@ -151,7 +151,7 @@ def test_transform_drop_events_by_area(area_ratio):
break

assert (
dropped_area_found is True
dropped_area_found is True
), f"There should be an area with {dropped_events} events dropped in the obtained sequence."


@@ -223,6 +223,51 @@ def test_transform_drop_pixel(coordinates, hot_pixel_frequency):
assert events is not orig_events


@pytest.mark.parametrize(
"hot_pixel_frequency, event_max_freq",
[(59, 60), (10, 60)],
)
def test_transform_drop_pixel_unequal_sensor(hot_pixel_frequency, event_max_freq):
orig_events, sensor_size = create_random_input(
n_events=40000, sensor_size=(15, 20, 2)
)
orig_events = orig_events.tolist()
orig_events += [(0, 0, int(t * 1e3), 1) for t in np.arange(1, 1e6, 1e3 / event_max_freq)]
orig_events += [(0, 19, int(t * 1e3), 1) for t in np.arange(1, 1e6, 1e3 / event_max_freq)]
orig_events += [(14, 0, int(t * 1e3), 1) for t in np.arange(1, 1e6, 1e3 / event_max_freq)]
orig_events += [(14, 19, int(t * 1e3), 1) for t in np.arange(1, 1e6, 1e3 / event_max_freq)]
# cast back to numpy events
orig_events = np.asarray(orig_events, np.dtype([("x", int), ("y", int), ("t", int), ("p", int)]))

transform = transforms.DropPixel(
coordinates=None, hot_pixel_frequency=hot_pixel_frequency
)

events = transform(orig_events)
assert len(np.where((events["x"] == 0) & (events["y"] == 0))[0]) == 0
assert len(np.where((events["x"] == 14) & (events["y"] == 0))[0]) == 0
assert len(np.where((events["x"] == 0) & (events["y"] == 19))[0]) == 0
assert len(np.where((events["x"] == 14) & (events["y"] == 19))[0]) == 0


@pytest.mark.parametrize(
"coordinates, hot_pixel_frequency",
[(((9, 11), (10, 12), (11, 13)), None), (None, 10000)],
)
def test_transform_drop_pixel_empty(coordinates, hot_pixel_frequency):
orig_events, sensor_size = create_random_input(
n_events=0, sensor_size=(15, 20, 2)
)

transform = transforms.DropPixel(coordinates=None, hot_pixel_frequency=hot_pixel_frequency)
events = transform(orig_events)
assert len(events) == len(orig_events)

transform = transforms.DropPixel(coordinates=coordinates, hot_pixel_frequency=None)
events = transform(orig_events)
assert len(events) == len(orig_events)


@pytest.mark.parametrize(
"coordinates, hot_pixel_frequency",
[(((199, 11), (199, 12), (11, 13)), None), (None, 5000)],
@@ -247,7 +292,8 @@ def test_transform_drop_pixel_raster(coordinates, hot_pixel_frequency):
assert not merged_polarity_raster[merged_polarity_raster > 5000].sum().sum()


@pytest.mark.parametrize("time_factor, spatial_factor, target_size", [(1, 0.25, None), (1e-3, (1, 2), None), (1, 1, (5, 5))])
@pytest.mark.parametrize("time_factor, spatial_factor, target_size",
[(1, 0.25, None), (1e-3, (1, 2), None), (1, 1, (5, 5))])
def test_transform_downsample(time_factor, spatial_factor, target_size):
orig_events, sensor_size = create_random_input()

@@ -256,43 +302,42 @@ def test_transform_downsample(time_factor, spatial_factor, target_size):
)

events = transform(orig_events)

if not isinstance(spatial_factor, tuple):
spatial_factor = (spatial_factor, spatial_factor)

if target_size is None:
assert np.array_equal(
(orig_events["t"] * time_factor).astype(orig_events["t"].dtype), events["t"]
)
assert np.array_equal(np.floor(orig_events["x"] * spatial_factor[0]), events["x"])
assert np.array_equal(np.floor(orig_events["y"] * spatial_factor[1]), events["y"])

else:
spatial_factor_test = np.asarray(target_size) / sensor_size[:-1]
assert np.array_equal(np.floor(orig_events["x"] * spatial_factor_test[0]), events["x"])
assert np.array_equal(np.floor(orig_events["y"] * spatial_factor_test[1]), events["y"])

assert events is not orig_events
@pytest.mark.parametrize("target_size, dt, downsampling_method, noise_threshold, differentiator_time_bins",


@pytest.mark.parametrize("target_size, dt, downsampling_method, noise_threshold, differentiator_time_bins",
[((50, 50), 0.05, 'integrator', 1, None),
((20, 15), 5, 'differentiator', 3, 1)])
def test_transform_event_downsampling(target_size, dt, downsampling_method, noise_threshold,
def test_transform_event_downsampling(target_size, dt, downsampling_method, noise_threshold,
differentiator_time_bins):

orig_events, sensor_size = create_random_input()
transform = transforms.EventDownsampling(sensor_size=sensor_size, target_size=target_size, dt=dt,

transform = transforms.EventDownsampling(sensor_size=sensor_size, target_size=target_size, dt=dt,
downsampling_method=downsampling_method, noise_threshold=noise_threshold,
differentiator_time_bins=differentiator_time_bins)

events = transform(orig_events)

assert len(events) <= len(orig_events)
assert np.logical_and(np.all(events["x"] <= target_size[0]), np.all(events["y"] <= target_size[1]))
assert events is not orig_events
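
The constructor arguments exercised above map one-to-one onto EventDownsampling; a rough usage sketch mirroring the 'integrator' case (all values taken from the parametrization above, not library defaults):

import numpy as np
import tonic.transforms as transforms

dtype = np.dtype([("x", int), ("y", int), ("t", int), ("p", int)])
events = np.array([(10, 20, 100, 1), (70, 90, 5000, 0)], dtype=dtype)

downsample = transforms.EventDownsampling(
    sensor_size=(128, 128, 2),
    target_size=(50, 50),
    dt=0.05,
    downsampling_method="integrator",
    noise_threshold=1,
    differentiator_time_bins=None,
)
small_events = downsample(events)  # coordinates now fall within the 50 x 50 target grid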


@pytest.mark.parametrize("target_size", [(50, 50), (10, 5)])
def test_transform_random_crop(target_size):
@@ -465,13 +510,13 @@ def test_transform_spatial_jitter(variance, clip_outliers):
assert np.isclose(events["y"].all(), orig_events["y"].all(), atol=2 * variance)

assert (
events["x"] - orig_events["x"]
== (events["x"] - orig_events["x"]).astype(int)
events["x"] - orig_events["x"]
== (events["x"] - orig_events["x"]).astype(int)
).all()

assert (
events["y"] - orig_events["y"]
== (events["y"] - orig_events["y"]).astype(int)
events["y"] - orig_events["y"]
== (events["y"] - orig_events["y"]).astype(int)
).all()

else:
@@ -503,8 +548,8 @@ def test_transform_time_jitter(std, clip_negative, sort_timestamps):
np.testing.assert_array_equal(events["y"], orig_events["y"])
np.testing.assert_array_equal(events["p"], orig_events["p"])
assert (
events["t"] - orig_events["t"]
== (events["t"] - orig_events["t"]).astype(int)
events["t"] - orig_events["t"]
== (events["t"] - orig_events["t"]).astype(int)
).all()
assert events is not orig_events

@@ -562,17 +607,7 @@ def test_transform_time_skew(coefficient, offset):
assert events is not orig_events


@pytest.mark.parametrize(
"n",
[
100,
0,
(
10,
100,
),
],
)
@pytest.mark.parametrize("n", [100, 0, (10, 100)])
def test_transform_uniform_noise(n):
orig_events, sensor_size = create_random_input()

@@ -597,6 +632,16 @@ def test_transform_uniform_noise(n):
assert events is not orig_events


@pytest.mark.parametrize("n", [100, 0, (10, 100)])
def test_transform_uniform_noise_empty(n):
orig_events, sensor_size = create_random_input(n_events=0)
assert len(orig_events) == 0

transform = transforms.UniformNoise(sensor_size=sensor_size, n=n)
events = transform(orig_events)
assert len(events) == 0 # check returns an empty array, independent of n.


def test_transform_time_alignment():
orig_events, sensor_size = create_random_input()

@@ -606,3 +651,36 @@ def test_transform_time_alignment():

assert np.min(events["t"]) == 0
assert events is not orig_events


def test_toframe_empty():
orig_events, sensor_size = create_random_input(n_events=0)
assert len(orig_events) == 0

with pytest.raises(ValueError): # check that empty array raises error if no slicing method is specified
transform = transforms.ToFrame(sensor_size=sensor_size)
frame = transform(orig_events)

n_event_bins = 100
transform = transforms.ToFrame(sensor_size=sensor_size, n_event_bins=n_event_bins)
frame = transform(orig_events)
assert frame.shape == (n_event_bins, sensor_size[2], sensor_size[0], sensor_size[1])
assert frame.sum() == 0

n_time_bins = 100
transform = transforms.ToFrame(sensor_size=sensor_size, n_time_bins=n_time_bins)
frame = transform(orig_events)
assert frame.shape == (n_time_bins, sensor_size[2], sensor_size[0], sensor_size[1])
assert frame.sum() == 0

event_count = 1e3
transform = transforms.ToFrame(sensor_size=sensor_size, event_count=event_count)
frame = transform(orig_events)
assert frame.shape == (1, sensor_size[2], sensor_size[0], sensor_size[1])
assert frame.sum() == 0

time_window = 1e3
transform = transforms.ToFrame(sensor_size=sensor_size, time_window=time_window)
frame = transform(orig_events)
assert frame.shape == (1, sensor_size[2], sensor_size[0], sensor_size[1])
assert frame.sum() == 0
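
The test above confirms ToFrame copes with empty input for every slicing mode; for comparison, a minimal non-empty sketch (sensor size, bin count, and event values are illustrative):

import numpy as np
import tonic.transforms as transforms

dtype = np.dtype([("x", int), ("y", int), ("t", int), ("p", int)])
events = np.array([(1, 2, 100, 1), (30, 33, 900, 0)], dtype=dtype)

# Produces a dense array of shape (n_time_bins, polarity, x, y), as asserted above.
to_frame = transforms.ToFrame(sensor_size=(34, 34, 2), n_time_bins=10)
frames = to_frame(events)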
