After an embarrassingly long delay, I finally managed to allocate time to implement the modifications discussed in #17. :)
The initial goal was to make it possible to use AbstractOperators with CUDA.jl or other GPU packages; the main obstacle was that operators and their combinations allocated their buffers on the CPU. With this change it becomes possible to override two new functions, `domainStorageType` and `codomainStorageType`, which determine the type of the buffers/outputs. That way one can implement CPU ↔ GPU and GPU ↔ GPU operators, or even CPU ↔ CPU operators that work on `AbstractArray`s other than `Array` (e.g. `NamedDimsArray`).

The default implementation of `domainStorageType`/`codomainStorageType` for `AbstractOperator`s returns `Array`, or `ArrayPartition` when the domain/codomain size of the operator is a tuple of tuples; therefore no breaking changes are needed, and all tests pass without modification.
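To illustrate how a downstream operator could opt into GPU storage, here is a minimal sketch. The `CuMatrixOp` type is purely illustrative, and the exact signatures of the two new functions (operator in, array type out) are my assumption of how they are meant to be overridden, not code from the package:

```julia
# Sketch only: a hypothetical GPU-backed operator showing where the two new
# functions would be overridden. The rest of the usual AbstractOperator
# interface (sizes, element types, the operator action, ...) is omitted here.
using AbstractOperators, CUDA

struct CuMatrixOp{T} <: AbstractOperator
    A::CuMatrix{T}   # dense matrix stored on the GPU
end

# Declare that buffers/outputs on both sides of this operator are CuArrays,
# so combinations involving it allocate on the GPU instead of the CPU.
AbstractOperators.domainStorageType(L::CuMatrixOp{T}) where {T}   = CuArray{T,1}
AbstractOperators.codomainStorageType(L::CuMatrixOp{T}) where {T} = CuArray{T,1}
```

With something like this in place, compositions and block combinations involving `CuMatrixOp` should allocate their intermediate buffers as `CuArray`s rather than `Array`s, while operators that do not override the defaults keep the current CPU behaviour.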