This is a similar question as outlined here: #347 (comment)

I have an ArrayFire array that can give out its device pointer as an `int`. Is it possible to somehow use this device pointer in a kernel call, or to make a `GPUArray` from it? In PyOpenCL this can be achieved with `cl.MemoryObject.from_int_ptr`, but is there an equivalent function in PyCUDA?

Replies: 1 comment

-
You can create a `GPUArray` that wraps the ArrayFire buffer by passing the device pointer as the `gpudata` argument:

```python
import numpy as np
import arrayfire as af

# Make sure ArrayFire uses its CUDA backend, so device_ptr() is a CUDA pointer.
if af.get_active_backend() != "cuda":
    af.set_backend("cuda")

import pycuda.gpuarray as garray

af_arr = af.to_array(np.ones(50, dtype="f"))

# Wrap the ArrayFire buffer without copying; ArrayFire still owns the memory.
pycuda_arr = garray.empty(af_arr.shape, np.float32, gpudata=af_arr.device_ptr())
assert pycuda_arr.ptr == af_arr.device_ptr()

# Both views alias the same buffer, so writes through PyCUDA are visible to ArrayFire.
pycuda_arr += 1
print(af_arr.to_ndarray())
```

Beware: from my (admittedly very limited) experience with ArrayFire, it seems that ArrayFire almost always re-allocates memory (even for in-place operations like …), so a `GPUArray` wrapping an earlier pointer may silently stop aliasing the ArrayFire array.
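If you rely on the aliasing, it may be worth checking whether the pointer survived an operation. A minimal sketch (the `+= 1` is just an illustrative op; the exact behavior may vary with ArrayFire version and its JIT):

```python
# Sketch: detect whether an ArrayFire op left the buffer in place.
ptr_before = af_arr.device_ptr()
af_arr += 1  # looks in-place from the Python side...
ptr_after = af_arr.device_ptr()
# ...but if this prints False, ArrayFire allocated a new buffer, and any
# GPUArray still wrapping ptr_before no longer aliases af_arr.
print(ptr_before == ptr_after)
```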
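As for the kernel-call half of the question: PyCUDA kernels accept a raw device pointer passed as a pointer-sized numpy integer. Below is a minimal sketch, assuming ArrayFire's CUDA backend is active and its context is current on the calling thread (the same assumption the snippet above makes); the `double_it` kernel and the launch configuration are illustrative only:

```python
import numpy as np
import arrayfire as af
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# An illustrative kernel that doubles each element in place.
mod = SourceModule("""
__global__ void double_it(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= 2.0f;
}
""")
double_it = mod.get_function("double_it")

af_arr = af.to_array(np.arange(64, dtype="f"))
n = int(af_arr.elements())

# Pass the raw pointer as np.uintp (a pointer-width unsigned integer),
# which PyCUDA packs into the argument buffer like a device pointer.
double_it(np.uintp(af_arr.device_ptr()), np.int32(n),
          block=(64, 1, 1), grid=((n + 63) // 64, 1))

drv.Context.synchronize()  # the launch is async; wait before reading back
print(af_arr.to_ndarray())
```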