Feature/partial sum #114
Conversation
Interesting, tests are fine on my side... checking again.
Force-pushed from e1f1fb2 to da57e28.
Just realized, this (partially) closes #47.
Force-pushed from da57e28 to cbc070b.
dwave/optimization/include/dwave-optimization/nodes/mathematical.hpp (outdated)
Force-pushed from 3aa9414 to 7bee0fd.
const ssize_t axis);

/// Gets the strides of an n-dimensional array assuming contiguous memory
std::vector<ssize_t> as_contiguous_strides(const std::span<const ssize_t> shape);
This duplicates the functionality of the existing shape_to_strides() static method. Though of course it has a slightly different signature.
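For context, both helpers presumably compute the row-major strides of a packed array; NumPy illustrates the arithmetic (NumPy is just for illustration here, it is not used by this code):

```python
import numpy as np

# For a C-contiguous float64 array of shape (2, 3, 4), each stride is
# the product of all later dimensions times the 8-byte itemsize:
# (3*4*8, 4*8, 8) = (96, 32, 8).
a = np.zeros((2, 3, 4))
assert a.strides == (96, 32, 8)
```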
I moved this behind a compilation barrier in mathematical.cpp.
Ok, I tried to make the code ready to support multiple axes in the future. I pushed the single-axis assumption behind the compilation barrier. Can you check the serialization/deserialization?
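A quick way to exercise that is a file round trip; a minimal sketch, assuming dwave-optimization's Model.to_file()/from_file() interface and an integer() array symbol:

```python
from dwave.optimization import Model

model = Model()
x = model.integer((3, 4), lower_bound=0, upper_bound=5)
x.sum(axis=1)  # should add a PartialSum node over axis 1

# Serialize and reload, then check the node survived the round trip.
with model.to_file() as f:
    copy = Model.from_file(f)
assert model.num_nodes() == copy.num_nodes()
```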
Force-pushed from 854653e to 4c484f3.
dwave/optimization/model.pyi (outdated)
@@ -178,4 +178,4 @@ class ArraySymbol(Symbol):
     def state(self, index: int = 0, *, copy: bool = True) -> numpy.ndarray: ...
     def state_size(self) -> int: ...
     def strides(self) -> typing.Tuple[int, ...]: ...
-    def sum(self) -> Sum: ...
+    def sum(self, axis:typing.Optional[int] = None) -> typing.Union[Sum, PartialSum]: ...
Suggested change:
-    def sum(self, axis:typing.Optional[int] = None) -> typing.Union[Sum, PartialSum]: ...
+    def sum(self, axis: typing.Optional[int] = None) -> typing.Union[Sum, PartialSum]: ...
dwave/optimization/model.pyx (outdated)
from dwave.optimization.symbols import PartialSum  # avoid circular import
from dwave.optimization.symbols import Sum  # avoid circular import

if axis is not None:
    if not isinstance(axis, numbers.Integral):
        raise TypeError("axis of the sum should be an int")

    if not (0 <= axis < self.ndim()):
        raise ValueError("axis should be 0 <= axis < self.ndim()")

    return PartialSum(self, axis)

return Sum(self)
Suggested change:
-from dwave.optimization.symbols import PartialSum  # avoid circular import
-from dwave.optimization.symbols import Sum  # avoid circular import
 if axis is not None:
     if not isinstance(axis, numbers.Integral):
         raise TypeError("axis of the sum should be an int")
     if not (0 <= axis < self.ndim()):
         raise ValueError("axis should be 0 <= axis < self.ndim()")
-    return PartialSum(self, axis)
-return Sum(self)
+    return dwave.optimization.symbols.PartialSum(self, axis)
+return dwave.optimization.symbols.Sum(self)
This pattern works too and doesn't make an unnecessary import.
This does not compile; it doesn't know what dwave is, of course.
Ah, right. You can add import dwave.optimization.symbols back at the top of the function.
But then we're importing more, aren't we?
> This pattern works too and doesn't make an unnecessary import.
The unnecessary bit was adding two symbols to the namespace; the import happens either way.
This is all very much aesthetic stuff. I doubt the performance difference is even noticeable.
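For the record, the two deferred-import styles under discussion, in generic form (illustrative code, not the file's actual contents):

```python
def sum_local_names(self, axis=None):
    # Style 1: bind the symbol names into the function's local scope.
    from dwave.optimization.symbols import PartialSum, Sum
    return Sum(self) if axis is None else PartialSum(self, axis)

def sum_qualified_names(self, axis=None):
    # Style 2: import the module and use fully-qualified names; no extra
    # names are bound locally, but the import statement is still needed.
    import dwave.optimization.symbols
    if axis is None:
        return dwave.optimization.symbols.Sum(self)
    return dwave.optimization.symbols.PartialSum(self, axis)
```

Either way the module is loaded once and cached in sys.modules, so the per-call cost is a dictionary lookup; both styles defer the import to call time, which is what breaks the circular dependency.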
dwave/optimization/src/array.cpp (outdated)
ssize_t ndim = static_cast<ssize_t>(strides.size());
std::vector<ssize_t> indices;
indices.reserve(ndim);
Suggested change:
-ssize_t ndim = static_cast<ssize_t>(strides.size());
 std::vector<ssize_t> indices;
-indices.reserve(ndim);
+indices.reserve(strides.size());
Looks like this is the only place ndim is used. And this way we avoid casting to ssize_t and then back to size_t.
dwave/optimization/src/array.cpp (outdated)
std::vector<ssize_t> indices;
indices.reserve(ndim);

for (auto stride : strides) {
Suggested change:
-for (auto stride : strides) {
+for (const auto& stride : strides) {
dwave/optimization/src/array.cpp (outdated)
ssize_t ravel_multi_index(const std::span<const ssize_t> strides,
                          const std::span<const ssize_t> indices) {
    ssize_t ndim = static_cast<ssize_t>(strides.size());
Suggested change:
-ssize_t ndim = static_cast<ssize_t>(strides.size());
+const ssize_t ndim = static_cast<ssize_t>(strides.size());
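For reference, the helper's arithmetic is just an inner product of indices with strides; a Python sketch of the presumed semantics:

```python
def ravel_multi_index(strides, indices):
    # Flat offset of a multi-index: sum of index*stride over all
    # dimensions. With element (rather than byte) strides this is the
    # flat element index, matching numpy.ravel_multi_index.
    assert len(strides) == len(indices)
    return sum(i * s for i, s in zip(indices, strides))

# Shape (3, 4) has contiguous element strides (4, 1), so index (1, 2)
# ravels to 1*4 + 2 = 6.
assert ravel_multi_index((4, 1), (1, 2)) == 6
```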
std::vector<ssize_t> shape;
shape.assign(input_shape.begin(), input_shape.end());
Suggested change:
-std::vector<ssize_t> shape;
-shape.assign(input_shape.begin(), input_shape.end());
+std::vector<ssize_t> shape(input_shape.begin(), input_shape.end());
std::vector<ssize_t> as_contiguous_strides(const std::span<const ssize_t> shape) {
    ssize_t ndim = static_cast<ssize_t>(shape.size());

    assert(ndim >= 0);
assert(ndim > 0);

right? Because we access strides[ndim - 1] shortly. Or, if we want to handle the ndim == 0 case, we could do:

assert(ndim >= 0);
if (ndim < 1) return {};
template <class BinaryOp>
PartialReduceNode<BinaryOp>::PartialReduceNode(ArrayNode* node_ptr, std::span<const ssize_t> axes,
                                               double init)
        : ArrayOutputMixin(partial_reduce_shape(node_ptr->shape(), axes[0])),
This will segfault with axes.size() < 1. Unfortunately std::span::at() doesn't come in until C++26, otherwise that would be an elegant fix. I think you'll need to add a

std::span<const ssize_t> nonempty(std::span<const ssize_t> span) {
    if (span.empty()) throw std::invalid_argument("axes must not be empty");  // any suitable exception type
    return span;
}

function and then use it here:

        : ArrayOutputMixin(partial_reduce_shape(node_ptr->shape(), nonempty(axes)[0])),
        : PartialReduceNode(array_ptr, axis, 0) {}

template <class BinaryOp>
PartialReduceNode<BinaryOp>::PartialReduceNode(ArrayNode* array_ptr, std::span<const ssize_t> axes)
I don't think we want this generic overload, right? It doesn't set up init, and we already cover the one case that does.
So remove that from the header as well?
So the header file keeps it. But we only implement it for the relevant type overloads.
Force-pushed from 4c484f3 to 2867aa1.
LGTM
@@ -23,6 +23,7 @@
 #include <memory>
 #include <numeric>
 #include <optional>
+#include <ranges>
Do we use ranges anywhere here?
I used to, but not now. Removing it.
Force-pushed from 9d42944 to a9544ab.
Added a partial sum node that allows doing things like summing an array symbol over a single axis.
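For example (a minimal sketch; the integer() symbol and the shapes are just illustrative):

```python
from dwave.optimization import Model

model = Model()
x = model.integer((2, 3), lower_bound=0, upper_bound=10)

total = x.sum()          # full reduction: a Sum symbol, scalar
per_col = x.sum(axis=0)  # partial reduction: a PartialSum symbol, shape (3,)
```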