updates for next rebuild cycle #9

Merged (6 commits, Nov 4, 2021)
Changes from all commits
27 changes: 27 additions & 0 deletions .github/workflows/test.yaml
@@ -0,0 +1,27 @@
name: test

on:
push:
branches: [ main ]
pull_request:
branches: [ main ]

jobs:
test-w-conda-recipe:
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: setup conda
uses: conda-incubator/setup-miniconda@v2
with:
auto-update-conda: true
auto-activate-base: true
activate-environment: ""
- name: linux conda build test
shell: bash -l {0}
run: |
conda install -n base -c conda-forge conda-build -y
conda build -c conda-forge conda-recipe
18 changes: 0 additions & 18 deletions .travis.yml

This file was deleted.

4 changes: 2 additions & 2 deletions README.md
@@ -1,4 +1,4 @@
[![Build Status](https://travis-ci.org/ilastik/ilastikrag.svg?branch=master)](https://travis-ci.org/ilastik/ilastikrag)
[![test](https://github.com/ilastik/ilastikrag/actions/workflows/test.yaml/badge.svg)](https://github.com/ilastik/ilastikrag/actions/workflows/test.yaml)

ilastikrag
==========
@@ -11,7 +11,7 @@ Installation
------------

```bash
conda install -c stuarteberg ilastikrag
conda install -c ilastik-forge ilastikrag
```

[Documentation]: http://stuarteberg.github.io/ilastikrag
10 changes: 0 additions & 10 deletions conda-recipe/conda_build_config.yaml

This file was deleted.

18 changes: 9 additions & 9 deletions conda-recipe/meta.yaml
@@ -13,21 +13,21 @@ source:

build:
noarch: python
number: 1001
string: py_{{PKG_BUILDNUM}}_h{{PKG_HASH}}_g{{GIT_FULL_HASH[:7]}}
number: 0
string: py_{{PKG_BUILDNUM}}_g{{GIT_FULL_HASH[:7]}}
script: python -m pip install --no-deps --ignore-installed .

requirements:
build:
- python {{ python }}
- python >=3.7
- pip
run:
- {{ pin_compatible('python') }}
- {{ pin_compatible('numpy') }}
- {{ pin_compatible('h5py') }}
- {{ pin_compatible('pandas', lower_bound='0.16', upper_bound='0.25') }}
- vigra >={{ vigra }}
- {{ pin_compatible('networkx') }}
- python >=3.7
- numpy >=1.12,<1.19
- h5py
- pandas >=0.25
- vigra >=1.11
- networkx >=1.11

test:
requires:
16 changes: 7 additions & 9 deletions dev/environment-dev.yaml
@@ -1,13 +1,11 @@
name: ilastikrag-dev2
name: ilastikrag-dev
channels:
- defaults
- conda-forge
- ilastik-forge
dependencies:
- h5py=2.9*
- networkx=2.2*
- numpy=1.14*
- pandas=0.24*
- h5py
- networkx
- numpy>=1.12,<1.20
- pandas
- pytest
- python=3.7*
- vigra=1.11*
- python>=3.7
- vigra
@@ -88,19 +88,12 @@ def ingest_edges(self, rag, edge_values):
ndim = len(self._dense_axiskeys)
covariance_matrices_array = np.zeros( (num_edges, ndim, ndim), dtype=np.float32 )

group_index = [-1]
group_index = [0]
def write_covariance_matrix(group_df):
"""
Computes the covariance matrix of the given group,
and writes it into the pre-existing covariance_matrices_array.
"""
# There's one 'gotcha' to watch out for here:
# GroupBy.apply() calls this function *twice* for the first group.
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html
if group_index[0] < 0:
group_index[0] += 1
return None

Member:

I don't understand why it makes sense to remove these lines. According to the docs linked in the comments, GroupBy.apply() retains the same quirky behavior in pandas 1.x that it always had: it processes the first group twice. Since this function stores its results via a side effect, we have to "skip" the first pass, right?

Member:

Admittedly, relying on side effects is generally an awkward solution I'd usually prefer to avoid, and that's exactly what is complicating this analysis. There might be a smarter way to implement this function, but I don't have time to think about it now. (When I originally wrote this code, I wasn't super well-versed in the pandas API.)

Author (@k-dominik), Nov 4, 2021:

Phew, good that you had a look. I have to admit that I based the removal on the 0.25 release notes:
https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.25.0.html#groupby-apply-on-dataframe-evaluates-first-group-only-once

I experimentally verified that the behavior with pandas 1.3.4 is consistent with the fix announced in 0.25.

Member:

Ah, okay. Thanks, and sorry for the noise. The pandas documentation seems to be incorrect! I tried to search through the history of this issue on the pandas issue tracker and became hopelessly confused.

Author (@k-dominik):

I also got confused, so I resorted to brute force.

In the table below, "count" indicates how often the function was called (same example as in the 0.25 release notes): 3 means some item is evaluated twice; 2 means that weird behavior is gone.

build count
0.23.4=py37h637b7d7_1000 3
0.23.4=py37hf8a1672_0 3
0.24.0=py37hf484d3e_0 3
0.24.1=py37hf484d3e_0 3
0.24.2=py37hb3f55d8_0 3
0.24.2=py37hb3f55d8_1 3
0.24.2=py37hf484d3e_0 3
0.25.0=py37hb3f55d8_0 2
0.25.1=py37hb3f55d8_0 2
0.25.2=py37hb3f55d8_0 2
0.25.3=py37hb3f55d8_0 2
1.0.0=py37hb3f55d8_0 2
1.0.1=py37hb3f55d8_0 2
1.0.2=py37h0da4684_0 2
1.0.3=py37h0da4684_0 2
1.0.3=py37h0da4684_1 2
1.0.4=py37h0da4684_0 2
1.0.5=py37h0da4684_0 2
1.1.0=py37h3340039_0 2
1.1.1=py37h3340039_0 2
1.1.2=py37h3340039_0 2
1.1.3=py37h3340039_0 2
1.1.3=py37h9fdb41a_1 2
1.1.3=py37h9fdb41a_2 2
1.1.3=py37hb33c840_2 2
1.1.4=py37h10a2094_0 2
1.1.5=py37hdc94413_0 2
1.2.0=py37h40f5888_1 2
1.2.0=py37hdc94413_0 2
1.2.0=py37hdc94413_1 2
1.2.1=py37h40f5888_0 2
1.2.1=py37hdc94413_0 2
1.2.2=py37h40f5888_0 2
1.2.2=py37hdc94413_0 2
1.2.3=py37h40f5888_0 2
1.2.3=py37hdc94413_0 2
1.2.4=py37h219a48f_0 2
1.2.4=py37h40f5888_0 2
1.2.5=py37h219a48f_0 2
1.2.5=py37h40f5888_0 2
1.3.0=py37h219a48f_0 2
1.3.1=py37h219a48f_0 2
1.3.1=py37h40f5888_0 2
1.3.2=py37h40f5888_0 2
1.3.2=py37he8f5f7f_0 2
1.3.3=py37h40f5888_0 2
1.3.3=py37he8f5f7f_0 2
1.3.4=py37h40f5888_0 2
1.3.4=py37he8f5f7f_0 2
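
For reference, a minimal sketch of this kind of call-counting check (the exact script isn't shown in the thread; the example below mirrors the one from the 0.25 release notes, with an illustrative `tracker` function):

```python
import pandas as pd

# Same example as in the pandas 0.25 release notes: count how many
# times GroupBy.apply() invokes the applied function.
df = pd.DataFrame({"a": ["x", "y"], "b": [1, 2]})

calls = []  # side effect: one entry per invocation

def tracker(group):
    calls.append(group.name)
    return group

df.groupby("a").apply(tracker)

# len(calls) == 3 on pandas < 0.25 (first group evaluated twice),
# len(calls) == 2 on pandas >= 0.25.
print(len(calls))
```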

# Compute covariance.
# Apparently computing it manually is much faster than group_df.cov()
group_vals = group_df.values.astype(np.float32, copy=False)
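As an aside on the "computing it manually is much faster" comment above, here is a minimal sketch of what a manual covariance computation can look like (illustrative only; the actual implementation sits in the collapsed remainder of the diff):

```python
import numpy as np

def manual_cov(group_vals: np.ndarray) -> np.ndarray:
    """Covariance matrix of an (n_samples, ndim) array,
    equivalent to np.cov(group_vals, rowvar=False)."""
    centered = group_vals - group_vals.mean(axis=0)
    return centered.T @ centered / (len(group_vals) - 1)

rng = np.random.default_rng(0)
vals = rng.random((100, 3)).astype(np.float32)
assert np.allclose(manual_cov(vals), np.cov(vals, rowvar=False), rtol=1e-4)
```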
@@ -47,24 +47,17 @@ def _compute_correlation_feature(self, rag, value_img):
z_edge_ids.sort(1)

values_df = pd.DataFrame(z_edge_ids, columns=['sp1', 'sp2'])
values_df['left_values'] = value_img[:-1].reshape(-1)
values_df['right_values'] = value_img[1:].reshape(-1)
values_df['left_values'] = np.array(value_img[:-1].reshape(-1))
values_df['right_values'] = np.array(value_img[1:].reshape(-1))

correlations = np.zeros( len(self._final_df), dtype=np.float32 )

group_index = [-1]
group_index = [0]
def write_correlation(group_df):
"""
Computes the correlation between 'left_values' and 'right_values' of the given group,
and writes it into the pre-existing correlations_array.
"""
# There's one 'gotcha' to watch out for here:
# GroupBy.apply() calls this function *twice* for the first group.
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html
if group_index[0] < 0:
group_index[0] += 1
return None

# Compute
covariance = np.cov(group_df['left_values'].values, group_df['right_values'].values)
denominator = np.sqrt(covariance[0,0]*covariance[1,1])
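The quantity being computed here is the Pearson correlation coefficient, cov(x, y) / (std(x) * std(y)). A minimal sketch of the same calculation on standalone arrays (values are illustrative):

```python
import numpy as np

left = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
right = np.array([1.1, 1.9, 3.2, 3.9], dtype=np.float32)

covariance = np.cov(left, right)  # 2x2: [[var(l), cov], [cov, var(r)]]
denominator = np.sqrt(covariance[0, 0] * covariance[1, 1])
correlation = covariance[0, 1] / denominator

# Agrees with np.corrcoef up to floating-point error.
assert np.isclose(correlation, np.corrcoef(left, right)[0, 1])
```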
8 changes: 4 additions & 4 deletions ilastikrag/tests/test_edgeregion_accumulator.py
@@ -48,14 +48,14 @@ def test1(self):
debug_sp[superpixels == sp1] = 128
for sp2 in problem_df['sp2'].values:
debug_sp[superpixels == sp2] = 255

vigra.impex.writeImage(debug_sp, '/tmp/debug_sp.png', dtype='NATIVE')

# The first axes should all be close.
# The second axes may differ somewhat in the case of purely linear edges,
# so we allow a higher tolerance.
assert np.isclose(radii[:,0], transposed_radii[:,0]).all()
assert np.isclose(radii[:,1], transposed_radii[:,1], atol=0.001).all()
np.testing.assert_allclose(radii[:, 0], transposed_radii[:, 0], rtol=1e-4, atol=1e-2)
np.testing.assert_allclose(radii[:, 1], transposed_radii[:, 1], rtol=1e-4, atol=1e-2)

def test2(self):
superpixels = np.zeros((10, 10), dtype=np.uint32)
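For reference on the tolerance change in this test: np.testing.assert_allclose checks |actual - desired| <= atol + rtol * |desired| elementwise, so rtol=1e-4 with atol=1e-2 tolerates a small relative error on large radii and a small absolute error near zero. A minimal sketch with illustrative values:

```python
import numpy as np

radii = np.array([5.0001, 0.0005], dtype=np.float32)
transposed_radii = np.array([5.0, 0.0], dtype=np.float32)

# Passes: |actual - desired| <= atol + rtol * |desired| holds elementwise.
# (np.isclose with its default atol=1e-8 would reject the second element.)
np.testing.assert_allclose(radii, transposed_radii, rtol=1e-4, atol=1e-2)
```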
8 changes: 4 additions & 4 deletions ilastikrag/tests/test_standard_accumulators.py
@@ -46,14 +46,14 @@ def test_sp_features_no_histogram(self):

# sp sum features ought to be normalized, too...
for _index, sp1, sp2, sp_sum_sum, sp_sum_difference in features_df.itertuples():
np.testing.assert_almost_equal(
np.testing.assert_allclose(
sp_sum_sum,
np.power(sp1*sp_counts[sp1] + sp2*sp_counts[sp2], 1./superpixels.ndim).astype(np.float32),
decimal=6)
np.testing.assert_almost_equal(
rtol=10e-4)
np.testing.assert_allclose(
sp_sum_difference,
np.power(np.abs(sp1*sp_counts[sp1] - sp2*sp_counts[sp2]), 1./superpixels.ndim).astype(np.float32),
decimal=6)
rtol=10e-4)

# MEAN
features_df = rag.compute_features(values, ['standard_sp_mean'])