decode_ind_ab #14

Open
whartley94 opened this issue Jun 16, 2020 · 1 comment
whartley94 commented Jun 16, 2020

In utils, the calculation in decode_ind_ab() is

    data_a = data_q/opt.A
    data_b = data_q - data_a*opt.A
    data_ab = torch.cat((data_a, data_b), dim=1)

However, I believe that, given how the encoding was done, we should instead have something like
data_a = (data_q - data_b)/opt.A

I imagine this would have to be solved with linear programming or something similar.
I was just wondering whether this is something you're aware of, or whether I am missing something?
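
For reference, here is my reading of the encoding step, a paraphrase of encode_ab_ind() in utils as I understand it rather than a verbatim copy: it maps each ab pair to a single index q = a_index*opt.A + b_index.

    import torch

    def encode_ab_ind_sketch(data_ab, opt):
        # my paraphrase of the encoding: ab values in [-1, 1] -> per-channel bin indices -> single index
        data_ab_rs = torch.round((data_ab*opt.ab_norm + opt.ab_max)/opt.ab_quant)   # bin index per channel, in [0, A)
        return data_ab_rs[:, [0], :, :]*opt.A + data_ab_rs[:, [1], :, :]            # q = a_index*A + b_index

If that is right, recovering b should take a modulo and recovering a an integer-style division, which is what made me doubt the current float division.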

My issue is that when I use decode_ind_ab as it currently stands, all my b values come through as -1, because with

    data_a = data_q/opt.A (eq1)
    data_b = data_q - data_a*opt.A (eq2)

we can substitute eq1 into eq2 to show that

data_b = data_q - (data_q/opt.A)*opt.A = data_q - data_q = 0

which then gets scaled and shifted to -1 before being returned.
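
A quick numerical check (with A = 23 as a stand-in for whatever opt.A actually is) shows the same cancellation:

    import torch

    A = 23                               # stand-in value for opt.A
    data_q = torch.tensor([[[[50.]]]])   # some arbitrary index

    data_a = data_q/A
    data_b = data_q - data_a*A
    print(data_b)                        # ~0, which the scaling in decode_ind_ab then maps to -1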

whartley94 commented Jun 17, 2020

This seems to work

def decode_ind_ab(data_q, opt):
    # Decode index into ab value
    # INPUTS
    #   data_q      Nx1xHxW \in [0,Q)
    # OUTPUTS
    #   data_ab     Nx2xHxW \in [-1,1]

    assert isinstance(opt.A, (int, float))
    data_b = torch.fmod(data_q, opt.A)    # b bin index = q mod A
    data_a = (data_q - data_b)/opt.A      # a bin index = (q - b)/A

    data_ab = torch.cat((data_a, data_b), dim=1)

    if data_q.is_cuda:
        type_out = torch.cuda.FloatTensor
    else:
        type_out = torch.FloatTensor
    data_ab = ((data_ab.type(type_out)*opt.ab_quant) - opt.ab_max)/opt.ab_norm

    return data_ab
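
As a sanity check, decoding every valid index with placeholder option values (the real ones come from the train options, so treat these numbers as stand-ins) gives a b channel that covers the full range instead of collapsing to -1:

    import torch
    from argparse import Namespace

    opt = Namespace(A=23, ab_quant=10., ab_max=110., ab_norm=110.)   # stand-in values only

    q = torch.arange(opt.A*opt.A, dtype=torch.float32).view(1, 1, opt.A, opt.A)   # every index in [0, Q)
    ab = decode_ind_ab(q, opt)

    print(ab[:, 0].min().item(), ab[:, 0].max().item())   # a channel spans [-1, 1]
    print(ab[:, 1].min().item(), ab[:, 1].max().item())   # b channel spans [-1, 1], no longer stuck at -1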
