All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- `e3nn.Irreps.mul_gcd`
- Rewrite `e3nn.tensor_square` to be simpler (and faster?)
- Use `jax.scipy.special.lpmn_values` to implement `e3nn.legendre`. Faster on GPU and supports reverse-mode differentiation.
- [BREAKING] Change the output format of `e3nn.legendre`
- Fix missing support for zero flags in `e3nn.elementwise_tensor_product`
- [BREAKING] Move `Instruction`, `FunctionalTensorProduct` and `FunctionalFullyConnectedTensorProduct` into the `e3nn.legacy` submodule
- Reimplement `e3nn.tensor_product` and `e3nn.elementwise_tensor_product` in a simpler way
- `e3nn.utils.vmap` to propagate `zero_flags` in the vectorized function.
- Simplify the tetris examples
- Example of what is fixed: assume `x.ndim = 2`; allow `x[:, None]` but prevent `x[:, :, None]` and `x[..., None]`
- [BREAKING] `e3nn.flax.Linear` and `e3nn.haiku.Linear` no longer output impossible irreps. To force the output of all irreps, use `force_irreps_out = True`. For instance `e3nn.flax.Linear("0e + 1o")("0e")` will now return `"0e"` instead of `"0e + 1o"` (see the sketch below).
- [BREAKING] `e3nn.utils.assert_equivariant` has the same signature as `e3nn.utils.equivariance_test`
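A minimal sketch of the `force_irreps_out` behaviour described above (assuming the `e3nn_jax` import name and a flax-style `init`/`apply` workflow; the keyword placement of `force_irreps_out` is an assumption):

```python
import jax
import jax.numpy as jnp
import e3nn_jax as e3nn

x = e3nn.IrrepsArray("0e", jnp.array([1.0]))

linear = e3nn.flax.Linear("0e + 1o")
w = linear.init(jax.random.PRNGKey(0), x)
print(linear.apply(w, x).irreps)  # 1x0e: a 1o output cannot be produced from a 0e input

linear = e3nn.flax.Linear("0e + 1o", force_irreps_out=True)
w = linear.init(jax.random.PRNGKey(0), x)
print(linear.apply(w, x).irreps)  # 1x0e+1x1o, with the impossible 1o part zero
```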
- [BREAKING] Move `as_irreps_array`, `zeros` and `zeros_like` from `e3nn.IrrepsArray` to `e3nn`
- [BREAKING] Move `IrrepsArray.from_list` to `e3nn.from_chunks`
- [BREAKING] Rename `IrrepsArray.list` into `IrrepsArray.chunks`
- [BREAKING] Rename `IrrepsArray.remove_nones` into `IrrepsArray.remove_zero_chunks`
- `e3nn.IrrepsArray` now has only `.array` as its data attribute.
- `e3nn.IrrepsArray.rechunk`
- `e3nn.IrrepsArray.zero_flags`: a tuple of bools that indicates which chunks are zero
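A minimal sketch of the renamed chunk API (assuming the `e3nn_jax` import name and a `leading_shape` argument for `e3nn.from_chunks`):

```python
import jax.numpy as jnp
import e3nn_jax as e3nn

# Build an IrrepsArray from per-irrep chunks; None stands for an all-zero chunk.
x = e3nn.from_chunks(
    "2x0e + 1o",
    [jnp.ones((2, 1)), None],  # each chunk has shape leading_shape + (mul, 2l + 1)
    leading_shape=(),
)
print(x.zero_flags)  # (False, True): the 1o chunk is zero
print(x.chunks)      # [array of shape (2, 1), None]
print(x.rechunk("1x0e + 1x0e + 1x1o").chunks)  # re-split into three chunks
```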
- [BREAKING] Renamed `e3nn.util` into `e3nn.utils`
- `Irreps.set_mul(int)` to set the multiplicity of all irreps
- `Irreps.filter(lmax=int)` to filter out irreps with `l > lmax`
- `IrrepsArray.filter(lmax=int)` to filter out irreps with `l > lmax`
- `IrrepsArray.__radd__` and `IrrepsArray.__rsub__` to support `scalar + IrrepsArray` and `scalar - IrrepsArray`
- `0 + IrrepsArray` and `0 - IrrepsArray` are now always accepted as special cases.
- Support for `IrrepsArray / array`
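A minimal sketch of the new `Irreps` helpers and `IrrepsArray` operators (assuming the `e3nn_jax` import name; the exact broadcasting rule for division by an array is an assumption here):

```python
import jax.numpy as jnp
import e3nn_jax as e3nn

irreps = e3nn.Irreps("2x0e + 1o")
print(irreps.set_mul(3))      # 3x0e+3x1o: set the multiplicity of every irrep
print(irreps.filter(lmax=0))  # 2x0e: drop everything with l > lmax

x = e3nn.IrrepsArray("2x0e", jnp.array([1.0, 2.0]))
print((1.0 - x).array)                      # scalar - IrrepsArray (here on scalar irreps)
print((x / 2.0).array)                      # division by a plain scalar
print((x / jnp.array([2.0, 4.0])).array)    # division by an array, one value per scalar
```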
- Add `utils` as a submodule
- `e3nn.scatter` operations handle indices with `ndim > 1`
- `e3nn.cross` for completeness
- Optimize `e3nn.reduced_symmetric_tensor_product_basis`, especially for the `keep_ir` argument
- `LinearSHTP` module implementing the optimized linear mixing of inputs tensor product with spherical harmonics
- `D_from_axis_angle`
- `to_s2grid`: `quadrature="gausslegendre"` by default
- `soft_odd` activation function for odd scalars
- More support of arrays implicitly converted into `IrrepsArray` as scalars (i.e. added a few `IrrepsArray.as_irreps_array`)
- `scalar_activation` simpler to use with default activation functions (a bit like `gate`)
- `e3nn.normalize_function` now uses a deterministic (not pseudorandom) algorithm to compute the normalization factor.
- `normalize_act` option to `e3nn.scalar_activation` and `e3nn.gate`. We can now turn the normalization off if we want to.
- `e3nn.norm_activation` as a new activation function.
- Fix `NaN` in the gradients of `e3nn.xyz_to_angles`. The gradients are now `0` when the input is on the poles.
- `e3nn.dot`: compute the dot product between two `IrrepsArray`
- `per_irrep` argument to `e3nn.norm`: compute the norm of each irrep independently if `per_irrep=True`
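A minimal sketch of `e3nn.dot` and the `per_irrep` norm (assuming the `e3nn_jax` import name and the `e3nn.normal(irreps, key)` helper):

```python
import jax
import e3nn_jax as e3nn

x = e3nn.normal("0e + 1o", jax.random.PRNGKey(0))
y = e3nn.normal("0e + 1o", jax.random.PRNGKey(1))

print(e3nn.dot(x, y))                        # invariant dot product of the two arrays
print(e3nn.norm(x, per_irrep=True).irreps)   # one 0e norm per irrep
print(e3nn.norm(x, per_irrep=False).irreps)  # a single 0e norm for the whole array
```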
- `e3nn.tensor_product_with_spherical_harmonics` from https://arxiv.org/pdf/2302.03655.pdf
- `__repr__(Irreps())` has been changed from `""` to `"Irreps()"`
- Fix spherical harmonics edge case when `output_irreps=Irreps()`
- `e3nn.SphericalSignal.sample` to sample a point on the sphere
- `e3nn.scatter_max`
- [BREAKING] Removed `e3nn.s2_sum_of_diracs` in favor of `e3nn.s2_dirac`
- [BREAKING] `e3nn.grad` now regroups the output by default. It can be disabled with `regroup_output=False`
- `e3nn.SphericalSignal` arithmetic operations
- `e3nn.Irreps.D_from_angles` computes (again!) the Wigner D matrices using the J matrices for `L <= 11`. This is faster and more accurate than using `expm`.
- `e3nn.SphericalSignal` class to represent signals on the sphere
- `Signal on the Sphere` section in the documentation
- `e3nn.Irreps.D_from_log_coordinates`
- `rotation_angle_from_*` functions
- `e3nn.to_s2point` function
- Wigner D matrices are computed from the log coordinates, which makes 1 instead of 3 calls to `expm`.
- [BREAKING] `e3nn.util.assert_output_dtype` renamed to `e3nn.util.assert_output_dtype_matches_input_dtype`
- [BREAKING] Update `experimental.point_convolution` to use the latest changes.
- [BREAKING] Changed the `e3nn.to_s2grid` and `e3nn.from_s2grid` signature and default normalization.
- [BREAKING] Moved all the `haiku` modules from the main module to the `e3nn.haiku` submodule.
- [BREAKING] Removed `e3nn.wigner_D` in favor of `e3nn.Irrep.D_from_*`
- Removed the `jax.jit` decorator from `Irreps.D_from_*` that was causing a bug.
- `e3nn.s2grid_vectors` and `e3nn.pad_to_plot_on_s2grid` to help plotting signals on the sphere
- `e3nn.util.assert_output_dtype` to check the output dtype of a function
- `e3nn.s2_irreps` is a function to create the irreps of the coefficients of a signal on the sphere
- `e3nn.reduced_antisymmetric_tensor_product_basis` to compute the basis of the reduced antisymmetric tensor product
- `IrrepsArray * scalar` is supported if the number of scalars matches the number of irreps
- Optimize the `reduced_symmetric_tensor_product`. It is now up to 100x faster than the previous implementation.
- `e3nn.from_s2grid` and `e3nn.to_s2grid` are now more flexible with input and output irreps; you can skip some l's and have them in any order
- [BREAKING] `e3nn.from_s2grid` requires an `irreps` argument instead of a `lmax` argument
- Increase robustness of `e3nn.spherical_harmonics` towards `nan` when `normalize=True`
- `IrrepsArray.astype` to cast the underlying array
- `e3nn.flax.MultiLayerPerceptron` and `e3nn.haiku.MultiLayerPerceptron`
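A minimal sketch of the flax variant (assuming it is a `flax.linen` module taking a tuple of layer widths and an `act` callable, and that plain scalar arrays are accepted as input):

```python
import jax
import jax.numpy as jnp
import e3nn_jax as e3nn

mlp = e3nn.flax.MultiLayerPerceptron((32, 32, 8), act=jax.nn.gelu)
x = jnp.ones((10, 4))                    # scalar features
params = mlp.init(jax.random.PRNGKey(0), x)
y = mlp.apply(params, x)                 # shape (10, 8)
```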
- `e3nn.IrrepsArray.from_list(..., dtype)`
- Add sparse tensor product as an option in `e3nn.tensor_product` and related functions. It sparsifies the Clebsch-Gordan coefficients. It has more impact when `fused=True`. It is disabled by default because no improvement was observed in the benchmarks.
- Add `log_coordinates` along the other parameterizations of SO(3): `e3nn.log_coordinates_to_matrix`, `e3nn.rand_log_coordinates`, etc.
- Set dtype for all `jnp.zeros(..., dtype)` calls in the codebase
- Set dtype for all `jnp.ones(..., dtype)` calls in the codebase
- [BREAKING] Removed `e3nn.full_tensor_product` in favor of `e3nn.tensor_product`
- [BREAKING] Removed `e3nn.FunctionalTensorSquare` in favor of `e3nn.tensor_square`
- [BREAKING] Removed `e3nn.TensorSquare` in favor of `e3nn.tensor_square`
- [BREAKING] Removed `e3nn.IrrepsArray.cat` in favor of `e3nn.concatenate`
- [BREAKING] Removed `e3nn.IrrepsArray.randn` in favor of `e3nn.normal`
- [BREAKING] Removed `e3nn.Irreps.randn` in favor of `e3nn.normal`
- [BREAKING] Removed `e3nn.Irreps.transform_by_*` in favor of `e3nn.IrrepsArray.transform_by_*`
- Move `BatchNorm` and `Dropout` to the `e3nn.haiku` submodule; they will be removed from the main module in the future.
- Move `e3nn.haiku.FullyConnectedTensorProduct` into the `haiku` submodule. Undeprecate it because it is faster than `e3nn.tensor_product` followed by `e3nn.Linear`. This is because `opt_einsum` optimizes the contraction of the two operations.
- `e3nn.scatter_sum` to replace `e3nn.index_add`. `e3nn.index_add` is deprecated.
- Add `flax` and `haiku` submodules. Plan to migrate all modules to `flax` and `haiku` in the future.
- Implement `e3nn.flax.Linear` and move `e3nn.Linear` to `e3nn.haiku.Linear`.
- [BREAKING] `3 * e3nn.Irreps("0e + 1o")` now returns `3x0e + 3x1o` instead of `1x0e + 1x1o + 1x0e + 1x1o + 1x0e + 1x1o` (see the sketch below)
- [BREAKING] In `Linear`, renamed `num_weights` to `num_indexed_weights` because it was confusing.
- `e3nn.Irreps("3x0e + 6x1o") // 3` returns `1x0e + 2x1o`
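A minimal sketch of the `Irreps` arithmetic described above (assuming the `e3nn_jax` import name):

```python
import e3nn_jax as e3nn

print(3 * e3nn.Irreps("0e + 1o"))       # 3x0e+3x1o (previously 1x0e+1x1o+1x0e+1x1o+1x0e+1x1o)
print(e3nn.Irreps("3x0e + 6x1o") // 3)  # 1x0e+2x1o: divide every multiplicity by 3
```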
- `s2grid` is now jitable
- `e3nn.Irreps.regroup` and `e3nn.IrrepsArray.regroup` to regroup irreps. Equivalent to `sort` followed by `simplify`.
- Add `regroup_output` parameter to `e3nn.tensor_product` and `e3nn.tensor_square` to regroup the output irreps.
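A minimal sketch of `regroup` (assuming the `e3nn_jax` import name):

```python
import jax
import e3nn_jax as e3nn

print(e3nn.Irreps("1o + 0e + 1o + 0e").regroup())  # 2x0e+2x1o: sort, then simplify

x = e3nn.normal("1o", jax.random.PRNGKey(0))
y = e3nn.normal("1o", jax.random.PRNGKey(1))
print(e3nn.tensor_product(x, y).irreps)  # already regrouped, since regroup_output=True by default
```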
- `e3nn.IrrepsArray.convert` is now private (`e3nn.IrrepsArray._convert`) because it is recommended to use other methods instead.
- breaking change: use `input.regroup()` in `e3nn.Linear`, which can change the structure of the parameters dictionary.
- breaking change: `regroup_output` is `True` by default in `e3nn.tensor_product` and `e3nn.tensor_square`.
- To facilitate debugging, if no `key` is provided to `e3nn.normal`, it will use the hash of the irreps.
- breaking change: changed the normalization of `e3nn.tensor_square` in the case of `normalized_input=True`
- Deprecate `e3nn.TensorSquare`
- `e3nn.Linear` now supports integer "weights" inputs.
- `e3nn.Linear` now supports a `name` argument.
- Add `.dtype` to `IrrepsArray` to get the dtype of the underlying array.
- `e3nn.MultiLayerPerceptron` names its layers `linear_0`, `linear_1`, etc.
- s2grid: `e3nn.from_s2grid` and `e3nn.to_s2grid`, thanks to @songk42 for the contribution
- Argument `max_order: int` to function `reduced_tensor_product_basis` to be able to limit the polynomial order of the basis
- `MultiLayerPerceptron` accepts `IrrepsArray` as input and output
- `e3nn.Linear` accepts optional weights as arguments that will be internally mixed with the free parameters. Very useful to implement the depthwise convolution.
- breaking change: `e3nn.normal` has a new argument to get normalized vectors.
- breaking change: `e3nn.tensor_square` now distinguishes between `normalization=norm` and `normalized_input=True`.
- `e3nn.SymmetricTensorProduct` operation: a parameterized version of `x + x^2 + x^3 + ...`
- `e3nn.soft_envelope`: a smooth `C^inf` envelope radial function
- `e3nn.tensor_square`
- `Irrep.generators` and `Irreps.generators` functions to get the generators of the representations
- `e3nn.bessel` function
- `slice_by_mul`, `slice_by_dim` and `slice_by_chunk` functions to `Irreps` and `IrrepsArray`
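A minimal sketch of the slicing helpers (assuming the `e3nn_jax` import name and that they are used as indexable attributes):

```python
import jax.numpy as jnp
import e3nn_jax as e3nn

x = e3nn.IrrepsArray("2x0e + 1o", jnp.arange(5.0))
print(x.slice_by_mul[1:].irreps)    # 1x0e+1x1o: skip the first multiplicity
print(x.slice_by_dim[:2].irreps)    # 2x0e: slice along array dimensions
print(x.slice_by_chunk[:1].irreps)  # 2x0e: keep only the first chunk
```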
- breaking change: `e3nn.soft_one_hot_linspace` does not support `bessel` anymore. Use `e3nn.bessel` instead.
- `e3nn.gate` is now more flexible about the input format, see examples in the docstring.
- breaking change: `IrrepsArray.split`
- Fix `IrrepsArray.zeros().at[...].add`
- `e3nn.reduced_symmetric_tensor_product_basis(irreps: Irreps, order: int)`
- `e3nn.IrrepsArray.filtered(keep: List[Irrep])`
- `e3nn.reduced_tensor_product_basis(formula_or_irreps_list: Union[str, List[e3nn.Irreps]], ...)`
- `IrrepsArray.at[i].set(v)` and `IrrepsArray.at[i].add(v)`
- Add `Irreps.is_scalar`
- Simple irreps indexing of `IrrepsArray`: like `x[..., "10x0e"]` but not `x[..., "0e + 1e"]`
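A minimal sketch of the irreps-string indexing (assuming the `e3nn_jax` import name):

```python
import jax.numpy as jnp
import e3nn_jax as e3nn

x = e3nn.IrrepsArray("10x0e + 2x1o", jnp.ones(16))
print(x[..., "10x0e"].irreps)  # 10x0e: a block that matches the irreps exactly can be selected
print(x[..., "2x1o"].irreps)   # 2x1o
# x[..., "0e + 1e"] would fail, since it does not match a contiguous block of the irreps
```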
- `e3nn.concatenate`, `e3nn.mean`, `e3nn.sum`
- `e3nn.norm` for `IrrepsArray`
- `e3nn.tensor_product`
- `e3nn.normal`
- Better support of `+ - * /` operators for `IrrepsArray`
- Add new operator `e3nn.grad`: it takes an `IrrepsArray -> IrrepsArray` function and returns an `IrrepsArray -> IrrepsArray` function
- Add support of operator `IrrepsArray ** scalar`
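A minimal sketch of `e3nn.grad` and the power operator (assuming the `e3nn_jax` import name; the choice of `e3nn.norm` as the scalar-valued function is illustrative):

```python
import jax.numpy as jnp
import e3nn_jax as e3nn

def f(x):
    return e3nn.norm(x)  # IrrepsArray -> IrrepsArray (0e scalars)

x = e3nn.IrrepsArray("1o", jnp.array([1.0, 2.0, 3.0]))
print(e3nn.grad(f)(x).irreps)  # the gradient is again an IrrepsArray (here 1x1o)

s = e3nn.IrrepsArray("0e", jnp.array([2.0]))
print((s ** 2).array)  # IrrepsArray ** scalar (here applied to 0e scalars)
```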
- Add support of `x[..., 3:6]` for `IrrepsArray`
- Add `e3nn.reduced_tensor_product_basis`
- Add `e3nn.stack`
- `IrrepsArray.cat` is now deprecated and replaced by `e3nn.concatenate`
- `e3nn.full_tensor_product` is now deprecated and replaced by `e3nn.tensor_product`
- `e3nn.FullyConnectedTensorProduct` is now deprecated in favor of `e3nn.tensor_product` and `e3nn.Linear`
- breaking change: remove `IrrepsArray.from_any`
- breaking change: remove option `optimize_einsums` (it is now always `True`)
- breaking change: rewrite the `equivariance_error` and `assert_equivariant` functions
- breaking change: change the ordering of `Irrep`. Now it matches with `Irrep.iterator`.
- breaking change: `Irrep("1e") == "1e"` and `Irreps("1e + 2e") == "1e + 2e"` are now `True`.
- breaking change: `Linear` simplifies the `irreps_out`, which might cause a reshape of the parameters.
- `index_add` supports `IrrepsArray`
- Broadcast for `Linear`
- Argument `channel_out` to `Linear` for convenience
- `Irreps` can be created from a `MulIrrep`
- `"0e" + Irreps("1e")` is now supported
- `"0e" + Irrep("1e")` is now supported
- `map_back` argument to `index_add`
- `IrrepsArray.split(list of irreps)`
- `poly_envelope` function
- breaking change: rename `IrrepsData` into `IrrepsArray`
- breaking change: `IrrepsArray.shape` is now equal to `contiguous.shape` (instead of `contiguous.shape[:-1]`)
- breaking change: `IrrepsArray * array` requires `array.shape[-1]` to be 1 or `array` to be a scalar
- breaking change: `IrrepsArray.contiguous` is renamed into `IrrepsArray.array`
- breaking change: `IrrepsArray.new` is renamed into `IrrepsArray.from_any`
- `spherical_harmonics` normalization is now set to `component` like everything else.
- breaking change: `IrrepsArray.from_contiguous` is removed. Use `IrrepsArray(irreps, array)` instead.
- Add `e3nn.config` to set global default parameters
- `__getitem__` to `IrrepsData`
- `gradient_normalization` argument that can be `element` or `path`
- `path_normalization` can be a number between 0 and 1
- Add nearest interpolation for `zoom`, default is linear
- Implement `custom_jvp` for spherical harmonics
- Docker image
- Add the `sh` function that does not use `IrrepsData` as input/output
- `legendre` algorithm to compute spherical harmonics
- Add flag `algorithm` to specify the algorithm to use for computing spherical harmonics; use `legendre` for large L.
- `experimental.voxel_convolution`: add optional dynamic steps (not static for jit)
- Fix a bug in the `experimental.voxel_convolution` constructor
- Function `matrix` to `FunctionalLinear`
- `experimental.voxel_convolution`: `padding` and add self-connection into the convolution kernel
- `experimental.voxel_pooling`: add `output_size` argument to the `zoom` function
- `IrrepsData`: the `list` attribute is now lazily initialized
- `experimental.voxel_convolution`: add possibility to have different radial functions depending on the spherical harmonic degree
- Behavior of `eps` in `BatchNorm`: now `input / sqrt((1 - eps) * norm^2 + eps)` instead of `input / sqrt(norm^2 + eps)`
- Optimized `spherical_harmonics` by decomposing the order in powers of 2. It is supposed to improve stability because fewer operations are performed for high orders. It improves the performance when computing a single order.
- Optimized `spherical_harmonics` by using dense matrix multiplication instead of sparse matrix multiplication.
- Add `loop` argument to `radius_graph`
- Use `dataclasses.dataclass` instead of a custom `dataclass`
- Get Clebsch-Gordan coefficients from qutip and a change of basis
- Add `start_zero` and `end_zero` arguments to the function `soft_one_hot_linspace`
- `IrrepsData` can be given as an argument of `spherical_harmonics`
- Added broadcasting of `IrrepsData`: `elementwise_tensor_product`, `FullyConnectedTensorProduct`, `full_tensor_product`
- `BatchNorm` supports None
- `BatchNorm`: change default value of `eps` from `1e-5` to `1e-4`
- `gate`: change default odd activation to `(1 - exp(-x^2)) * x`
- `gate`: list of activations argument is now optional
- `experimental.transformer.Transformer`: simplified interface using `IrrepsData` and swapped the order of two arguments
- `IrrepsData.repeat_irreps_by_last_axis`
- `IrrepsData.repeat_mul_by_last_axis`
- `IrrepsData.factor_mul_to_last_axis`
- Add `axis` argument to `IrrepsData.cat`
- `IrrepsData.remove_nones`
- `IrrepsData.ones`
- `experimental.point_convolution.Convolution`: simplified interface using `IrrepsData`
- Changelog