
Fix sampling algorithm #96

Merged
merged 14 commits into main on Jul 30, 2024
Conversation

GiggleLiu (Member)

The sampling algorithm now handles the case with marginal variables.
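
For reference, a minimal usage sketch of the sampling path this PR touches (the model file, evidence, and sample count below are illustrative placeholders, not taken from this PR):

```julia
using TensorInference

# Load a model in UAI format (file name is a placeholder).
model = read_model_file("asia.uai")

# Build the tensor network; `evidence` fixes observed variables, and the
# remaining (marginal, unobserved) variables are what the sampler must handle.
tn = TensorNetworkModel(model; evidence = Dict(1 => 0))

# Draw samples of the unobserved variables.
samples = sample(tn, 100)
```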

GiggleLiu requested a review from mroavi on July 28, 2024 at 16:16
mroavi (Collaborator) commented Jul 28, 2024

Not all tests pass when running the test suite on my machine. The error doesn't seem to be related to the sampling algorithm, though.

Error output
gradient-based tensor network solvers: Error During Test at /home/mroavi/.julia/dev/TensorInference/test/cuda.jl:6
  Got exception outside of a @test
  This object is not a GPU array
  Stacktrace:
    [1] error(s::String)
      @ Base ./error.jl:35
    [2] backend(::Type)
      @ GPUArraysCore ~/.julia/packages/GPUArraysCore/GMsgk/src/GPUArraysCore.jl:225
    [3] backend(x::Array{Float64, 0})
      @ GPUArraysCore ~/.julia/packages/GPUArraysCore/GMsgk/src/GPUArraysCore.jl:226
    [4] _copyto!
      @ ~/.julia/packages/GPUArrays/8Y80U/src/host/broadcast.jl:78 [inlined]
    [5] materialize!
      @ ~/.julia/packages/GPUArrays/8Y80U/src/host/broadcast.jl:38 [inlined]
    [6] materialize!
      @ ./broadcast.jl:911 [inlined]
    [7] macro expansion
      @ ~/.julia/packages/OMEinsum/pPXqJ/src/utils.jl:29 [inlined]
    [8] binary_einsum!(::OMEinsum.SimpleBinaryRule{(), (), ()}, x1::Array{Float64, 0}, x2::CuArray{Float64, 0, CUDA.DeviceMemory}, y::Array{Float64, 0}, sx::Bool, sy::Bool)
      @ OMEinsum ~/.julia/packages/OMEinsum/pPXqJ/src/binaryrules.jl:19
    [9] einsum!(ixs::Vector{Vector{Int64}}, iy::Vector{Int64}, xs::Tuple{Any, Any}, y::Any, sx::Bool, sy::Bool, size_dict::Dict{Int64, Int64})
      @ OMEinsum ~/.julia/packages/OMEinsum/pPXqJ/src/einsum.jl:115
   [10] einsum!
      @ OMEinsum ~/.julia/packages/OMEinsum/pPXqJ/src/einsum.jl:38 [inlined]
   [11] einsum(code::DynamicEinCode{Int64}, xs::Tuple, size_dict::Dict{Int64, Int64})
      @ OMEinsum ~/.julia/packages/OMEinsum/pPXqJ/src/einsum.jl:33
   [12] einsum(code::DynamicEinCode{Int64}, xs::Tuple{RescaledArray{Float64, 0, Array{Float64, 0}}, RescaledArray{Float64, 0, CuArray{Float64, 0, CUDA.DeviceMemory}}}, size_dict::Dict{Int64, Int64})
      @ TensorInference ~/.julia/dev/TensorInference/src/RescaledArray.jl:39
   [13] einsum_grad(ixs::Vector{Vector{Int64}}, xs::Any, iy::Vector{Int64}, size_dict::Dict{Int64, Int64}, cdy::RescaledArray{Float64, 0, Array{Float64, 0}}, i::Int64)
      @ OMEinsum ~/.julia/packages/OMEinsum/pPXqJ/src/autodiff.jl:28
   [14] #183
      @ TensorInference ~/.julia/dev/TensorInference/src/mar.jl:82 [inlined]
   [15] ntuple
      @ Base ./ntuple.jl:19 [inlined]
   [16] einsum_backward_rule(eins::DynamicEinCode{Int64}, xs::Tuple{RescaledArray{Float64, 0, CuArray{Float64, 0, CUDA.DeviceMemory}}, RescaledArray{Float64, 0, CuArray{Float64, 0, CUDA.DeviceMemory}}}, y::RescaledArray{Float64, 0, CuArray{Float64, 0, CUDA.DeviceMemory}}, size_dict::Dict{Int64, Int64}, dy::RescaledArray{Float64, 0, Array{Float64, 0}})
      @ TensorInference ~/.julia/dev/TensorInference/src/mar.jl:82
   [17] generate_gradient_tree(code::DynamicNestedEinsum{Int64}, cache::TensorInference.CacheTree{Float64}, dy::RescaledArray{Float64, 0, Array{Float64, 0}}, size_dict::Dict{Int64, Int64})
      @ TensorInference ~/.julia/dev/TensorInference/src/mar.jl:75
   [18] generate_gradient_tree(se::SlicedEinsum{Int64, DynamicNestedEinsum{Int64}}, cache::TensorInference.CacheTree{Float64}, dy::RescaledArray{Float64, 0, Array{Float64, 0}}, size_dict::Dict{Int64, Int64})
      @ TensorInference ~/.julia/dev/TensorInference/src/mar.jl:56
   [19] gradient_tree(code::SlicedEinsum{Int64, DynamicNestedEinsum{Int64}}, xs::Vector{RescaledArray{Float64, N, AT} where {N, AT<:AbstractArray{Float64, N}}})
      @ TensorInference ~/.julia/dev/TensorInference/src/mar.jl:94
   [20] cost_and_gradient(code::SlicedEinsum{Int64, DynamicNestedEinsum{Int64}}, xs::Vector{RescaledArray{Float64, N, AT} where {N, AT<:AbstractArray{Float64, N}}})
      @ TensorInference ~/.julia/dev/TensorInference/src/mar.jl:99
   [21] marginals(tn::TensorNetworkModel{Int64, SlicedEinsum{Int64, DynamicNestedEinsum{Int64}}, Array{Float64}}; usecuda::Bool, rescale::Bool)
      @ TensorInference ~/.julia/dev/TensorInference/src/mar.jl:193
   [22] macro expansion
      @ ./timing.jl:279 [inlined]
   [23] macro expansion
      @ ~/.julia/dev/TensorInference/test/cuda.jl:13 [inlined]
   [24] macro expansion
      @ ~/packages/julias/julia-1.10.0/share/julia/stdlib/v1.10/Test/src/Test.jl:1577 [inlined]
   [25] top-level scope
      @ ~/.julia/dev/TensorInference/test/cuda.jl:7
   [26] include(fname::String)
      @ Base.MainInclude ./client.jl:489
   [27] top-level scope
      @ ~/.julia/dev/TensorInference/test/runtests.jl:29
   [28] include(fname::String)
      @ Base.MainInclude ./client.jl:489
   [29] top-level scope
      @ none:6
   [30] eval
      @ Core ./boot.jl:385 [inlined]
   [31] exec_options(opts::Base.JLOptions)
      @ Base ./client.jl:291
   [32] _start()
      @ Base ./client.jl:552
Test Summary:                         | Error  Total     Time
gradient-based tensor network solvers |     1      1  1m13.3s
ERROR: LoadError: Some tests did not pass: 0 passed, 0 failed, 1 errored, 0 broken.
in expression starting at /home/mroavi/.julia/dev/TensorInference/test/cuda.jl:6
in expression starting at /home/mroavi/.julia/dev/TensorInference/test/runtests.jl:28

mroavi (Collaborator) commented Jul 28, 2024

I'm also getting the following error when trying to resolve the package versions:

(TensorInference) pkg> resolve
ERROR: empty intersection between GenericTensorNetworks@1.4.1 and project compatibility 2

GiggleLiu (Member, Author) commented Jul 29, 2024

Weird. Are you using a different registry? Have you tried updating the packages? The CI passed.
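
For anyone hitting the same resolver error, one way to check is to update and then inspect which version of GenericTensorNetworks is actually installed from the Pkg REPL (a generic sketch, not specific to this repository's setup):

```
(TensorInference) pkg> up
(TensorInference) pkg> st GenericTensorNetworks
```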

mroavi (Collaborator) commented Jul 29, 2024

> Weird. Are you using a different registry? Have you tried updating the packages? The CI passed.

I have not used a different registry. I updated the packages, but I'm still getting this error:

  Got exception outside of a @test
  This object is not a GPU array

Do the tests pass on your machine?

GiggleLiu (Member, Author)

I just fixed the bug: it was caused by incorrectly converting a target array to a rescaled array on the GPU. Thanks for checking the CUDA test!
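
Roughly the kind of fix involved, sketched with an illustrative helper (not the exact code in this PR): the backward-pass seed should be allocated with the same storage type as the forward result, so a CuArray input produces a CuArray seed instead of a plain CPU Array, as the stack trace above suggests.

```julia
# Illustrative only: allocate a 0-dimensional "one" tensor that lives on the
# same device (Array or CuArray) as `y`, instead of hard-coding a CPU array.
function one_like(y::AbstractArray{T}) where {T}
    seed = similar(y, ())   # same array type as y, zero-dimensional
    fill!(seed, one(T))
    return seed
end
```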

mroavi (Collaborator) commented Jul 30, 2024

> I just fixed the bug: it was caused by incorrectly converting a target array to a rescaled array on the GPU. Thanks for checking the CUDA test!

All tests pass now! Awesome work.

mroavi merged commit 849a28b into main on Jul 30, 2024
2 checks passed