Version 2.0.7 #564
Conversation
* add log_gamma diagnostic
* add missing export for log_gamma
* add missing exports for gamma_null_distribution, gamma_discrepancy
* fix broken unit tests
* rename log_gamma module to sbc
* add test_log_gamma unit test
* add return information to log_gamma docstring
* fix typo in docstring; use a fixed-length np array to collect log_gammas instead of appending to an empty list
…525)

* standardization: add test for multi-input values (failing). This test reveals two bugs in the standardization layer: count is updated multiple times, and batch_count is too small, as the sizes from reduce_axes have to be multiplied.
* breaking: fix bugs regarding count in standardization layer. Fixes #524. This fixes the two bugs described in c4cc133: count was accidentally updated, leading to wrong values; and count was calculated wrongly, as only the batch size was used. Correct is the product of all reduce dimensions. This led to wrong standard deviations. While the batch dimension is the same for all inputs, the size of the second dimension might vary. For this reason, we need to introduce an input-specific `count` variable. This breaks serialization.
* fix assert statement in test
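The count bug described above can be illustrated with a minimal running-moments sketch (a hypothetical class, not BayesFlow's actual Standardization layer): the per-batch count must be the product of the sizes of all reduced axes, not just the batch size, and must be updated exactly once per batch.

```python
import numpy as np

class RunningMoments:
    """Running mean/std over given axes. Sketch only: the per-batch
    count is the product of all reduced axis sizes, updated once."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, x, reduce_axes=(0,)):
        # correct count: product of ALL reduced dimensions, not len(x)
        batch_count = int(np.prod([x.shape[a] for a in reduce_axes]))
        batch_mean = x.mean(axis=reduce_axes)
        batch_var = x.var(axis=reduce_axes)
        delta = batch_mean - self.mean
        total = self.count + batch_count
        # parallel (Chan et al.) update of mean and sum of squared deviations
        self.mean = self.mean + delta * batch_count / total
        self.m2 = (self.m2 + batch_var * batch_count
                   + delta ** 2 * self.count * batch_count / total)
        self.count = total  # updated exactly once per batch

    def std(self):
        return np.sqrt(self.m2 / self.count)

rm = RunningMoments()
x1 = np.arange(12, dtype=float).reshape(4, 3)
x2 = x1 + 5.0
rm.update(x1, reduce_axes=(0, 1))
rm.update(x2, reduce_axes=(0, 1))
print(rm.count)  # 24: two batches of 4 * 3 reduced elements each
```

With the buggy version (count += batch size only), `count` would be 8 here and the standard deviation would come out too large.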
* fix numerical stability issues in sinkhorn_plan
* improve test suite
* fix ultra-strict convergence criterion in log_sinkhorn_plan
* update dependencies
* add comment about convergence check
* update docstring to reflect fixes
* sinkhorn_plan now returns a transport plan with uniform marginal distributions
* add unit test for sinkhorn_plan
* fix sinkhorn function by sampling from the logits of the transpose of the plan, instead of the plan directly
* sinkhorn(x1, x2) now samples from log(plan) to receive assignments such that x2[assignments] matches x1
* re-enable test_assignment_is_optimal() for method='sinkhorn'
* log_sinkhorn now correctly uses log_plan instead of keras.ops.exp(log_plan); log_sinkhorn_plan returns logits of the transport plan
* add unit tests for log_sinkhorn_plan
* fix faulty indexing with tensor for tensorflow backend
* re-add numItermax for ot pot test

Co-authored-by: Daniel Habermann <133031176+daniel-habermann@users.noreply.github.com>
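The log-domain formulation behind these fixes can be shown with a generic, self-contained Sinkhorn sketch (plain NumPy/SciPy; the function name, defaults, and convergence check here are illustrative assumptions, not BayesFlow's actual `log_sinkhorn_plan`):

```python
import numpy as np
from scipy.special import logsumexp

def log_sinkhorn_plan(cost, reg=0.1, max_iter=500, tol=1e-9):
    """Log-domain Sinkhorn iteration returning the LOG of a transport
    plan with uniform marginals. Generic sketch, not BayesFlow's API."""
    n, m = cost.shape
    log_a = np.full(n, -np.log(n))  # uniform source marginal
    log_b = np.full(m, -np.log(m))  # uniform target marginal
    log_K = -cost / reg             # log of the Gibbs kernel
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(max_iter):
        f_new = log_a - logsumexp(log_K + g[None, :], axis=1)
        g = log_b - logsumexp(log_K + f_new[:, None], axis=0)
        # convergence check on the dual updates, done in log space so it
        # does not become ultra-strict when plan entries are tiny
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    return f[:, None] + log_K + g[None, :]

rng = np.random.default_rng(0)
cost = rng.random((6, 6))
plan = np.exp(log_sinkhorn_plan(cost))
print(plan.sum(axis=0))  # each column sums to 1/6: uniform marginal
```

Working entirely in log space (never materializing `exp(log_plan)` until the end) is what avoids the underflow that motivated the stability fixes.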
…he-broadcast-transformation

Fix broadcast serialization bug
* Pass correct training stage in CouplingFlow.compute_metrics
* Pass correct training stage in CIF and PointInferenceNetwork
* Custom test quantity support for calibration_ecdf
* rename variable [no ci]
* Consistent defaults for variable_keys/names in calibration_ecdf with test quantities
* Tests for calibration_ecdf with test_quantities
* Remove redundant comments and simplify others
* Fix docstrings and typehints

Co-authored-by: stefanradev93 <stefan.radev93@gmail.com>
* fix test_calibration_log_gamma_end_to_end unit test failing more often than expected
* set alpha to 0.1% in binom.ppf
* fix typo in comment
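For context, `scipy.stats.binom.ppf` turns a significance level into a two-sided acceptance band for a binomial count; lowering alpha to 0.1% widens the band, so a correctly calibrated test fails less often by chance. The numbers below are illustrative, not the unit test's actual values:

```python
from scipy.stats import binom

# Hypothetical setup: 1000 simulated datasets, expected success
# probability 0.5 under perfect calibration (illustrative only).
num_datasets = 1000
p = 0.5
alpha = 0.001  # 0.1%, as in the fix

# Two-sided acceptance band: counts outside [lower, upper] reject.
lower = binom.ppf(alpha / 2, num_datasets, p)
upper = binom.ppf(1 - alpha / 2, num_datasets, p)
print(lower, upper)
```

A flaky end-to-end test typically means alpha was large enough that random fluctuations crossed the band; shrinking alpha trades a little power for far fewer false alarms.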
* Remove stateful adapter features
* Fix tests
* Fix typo
* Remove nnpe from adapter
* Bring back notes [skip ci]
* Remove unnecessary restriction to kwargs only [skip ci]
* Remove old super call [skip ci]
* Robustify type [skip ci]
* remove standardize from multimodal sim notebook [no ci]
* add draft module docstring to augmentations module [no ci] Feel free to modify.
* adapt and run neurocognitive modeling notebook [no ci]
* adapt cCM playground notebook [no ci]
* adapt signature of Adapter.standardize
* add parameters missed in previous commit
* Minor NNPE polishing
* remove stage in docstring from OnlineDataset

Co-authored-by: Lasse Elsemüller <60779710+elseml@users.noreply.github.com>
Co-authored-by: Valentin Pratz <git@valentinpratz.de>
The function was renamed to `summarize` in v2.0.4.
Stabilizes the DiffusionModel class and adds a deprecation warning for the DiffusionModel class in the experimental module.
* added citation for resnet
* minor formatting

Co-authored-by: Valentin Pratz <git@valentinpratz.de>
Move DiffusionModel from experimental to networks module
… dev [skip ci]
* improvements to diagnostics plots: add markersize parameter, add tests, support dataset_id for pairs_samples. Fixes #554.
* simplify test_calibration_ecdf_from_quantiles
Add pairs_quantity and plot_quantity functions that allow plotting of quantities that can be calculated for each individual dataset. Currently, for the provided metrics this is only useful for posterior contraction, but could be useful for posterior z-score and other quantities as well.
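As a hedged illustration of such a per-dataset quantity: posterior contraction is commonly defined as `1 - Var[posterior] / Var[prior]`, yielding one value per dataset. The sketch below uses synthetic draws and plain NumPy; the variable names are assumptions, not the signatures of `pairs_quantity`/`plot_quantity`:

```python
import numpy as np

# Synthetic example: 100 datasets, 500 draws each (illustrative only).
rng = np.random.default_rng(0)
num_datasets, num_draws = 100, 500
prior_samples = rng.normal(0.0, 1.0, size=(num_datasets, num_draws))
post_samples = rng.normal(0.0, 0.3, size=(num_datasets, num_draws))

# One contraction value per individual dataset, suitable for plotting.
prior_var = prior_samples.var(axis=1)
post_var = post_samples.var(axis=1)
contraction = 1.0 - post_var / prior_var
print(contraction.shape)  # (100,)
```

Any quantity computable this way (posterior z-score, for instance) fits the same one-value-per-dataset pattern the new plotting functions target.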
…riant module (#557) (#561)

* adapt output dim of invariant module in equivariant module. See #557. The DeepSet showed bad performance and was not able to learn diverse summary statistics. Reducing the dimension of the output of the invariant module inside the equivariant module improves this, probably because the individual information of each set member gains importance compared to the shared information provided by the invariant module. There might be better settings for this, so we might update the default later on. However, this is already an improvement over the previous setting.
* DeepSet: adapt docstring to reflect code
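The structural idea can be sketched in plain NumPy (hypothetical dimensions and a mean-pooling stand-in, not the actual DeepSet code): each set member's features are concatenated with a shared invariant summary, so shrinking the summary's output dimension gives the per-member information more relative weight.

```python
import numpy as np

def equivariant_block(x, inv_dim=8):
    """Toy equivariant block: concatenate per-member features with a
    shared invariant summary. x has shape (set_size, feat_dim)."""
    inv = x.mean(axis=0)[:inv_dim]  # invariant summary, reduced output dim
    tiled = np.broadcast_to(inv, (x.shape[0], inv_dim))
    # per-member features dominate when inv_dim < feat_dim
    return np.concatenate([x, tiled], axis=1)

out = equivariant_block(np.ones((5, 16)))
print(out.shape)  # (5, 24): 16 per-member dims + 8 shared dims
```

Permuting the set members permutes the output rows identically (the summary is permutation-invariant), which is the equivariance property the block must preserve.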
Good timing for a new release.
Thanks for preparing the release, @stefanradev93 ! It looks like some commits are showing up that are already in the 2.0.6 release from the 19th of July. I'm not sure why this is the case, maybe because we used a dedicated release branch for that. Could you adjust the PR/release message to only include the changes after that date?
Done.
* add fm schedule
* add fm schedule
* add comments
* expose time_power_law_alpha
* Improve doc [skip ci]

Co-authored-by: stefanradev93 <stefan.radev93@gmail.com>
Thanks, looks good to me.
for DiffusionModel and FlowMatching. Euler shows significant deviations when computing the log-prob, which risks misleading users regarding the performance of the networks. rk45 is slower, but the problem is heavily reduced with this method.
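The effect is generic to ODE solvers and can be reproduced on a toy problem (illustrative only, not BayesFlow code): fixed-step Euler accumulates visible global error where adaptive RK45 does not.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy ODE dy/dt = -y on [0, 5]; exact solution y(5) = exp(-5).
def f(t, y):
    return -y

exact = np.exp(-5.0)

# Fixed-step Euler with a coarse step size.
y = 1.0
steps = 50
dt = 5.0 / steps
for _ in range(steps):
    y = y + dt * f(0.0, y)
euler_err = abs(y - exact)

# Adaptive RK45 controls local error per step.
sol = solve_ivp(f, (0.0, 5.0), [1.0], method="RK45",
                rtol=1e-8, atol=1e-10)
rk45_err = abs(sol.y[0, -1] - exact)
print(euler_err, rk45_err)
```

For a neural ODE log-probability, that accumulated Euler error shifts the reported density in exactly the way the paragraph above describes, making the network look better or worse than it is.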
Some notebooks need updating for the markersize changes, I'll send a fix soon.
fix nan to num inverse
Add warning if KERAS_BACKEND and actually loaded backend do not match. This can happen if keras is imported before BayesFlow.
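A minimal sketch of such a mismatch check (hypothetical helper, not BayesFlow's actual implementation):

```python
import os

def backend_mismatch(requested: "str | None", loaded: str) -> bool:
    """Return True when KERAS_BACKEND was set but a different backend
    was already loaded, e.g. because keras was imported first."""
    return requested is not None and requested != loaded

# If keras is imported before the env var takes effect, the two differ:
print(backend_mismatch("jax", "tensorflow"))  # → True
print(backend_mismatch("jax", "jax"))         # → False
print(backend_mismatch(None, os.environ.get("KERAS_BACKEND", "torch") or "torch"))  # → False
```

The practical fix is to set `KERAS_BACKEND` in the environment (or via `os.environ`) before any `import keras` or `import bayesflow` statement runs.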
This PR bundles multiple updates including:

Distributions

Diagnostics
- `height` argument in `pairs_posterior()` (#562)

General improvements
- DeepSet
- Residual network serialization

Misc