I have a problem running the Tetris example. Both

`python main.py +experiments=simple_tetris n_samples=10`

and

`python main.py env=tetris n_samples=10`

give me an error about a missing `alpha` parameter in kwargs. Here is the full traceback:
Traceback (most recent call last):
File "/home/mirekl/Tmp_Repositories/gflownet/main.py", line 102, in main
gflownet.train()
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/gflownet.py", line 1144, in train
losses = self.trajectorybalance_loss(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/gflownet.py", line 790, in trajectorybalance_loss
logrewards = batch.get_terminating_rewards(log=True, sort_by="trajectory")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/utils/batch.py", line 1210, in get_terminating_rewards
self._compute_rewards(log, do_non_terminating=False)
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/utils/batch.py", line 1003, in _compute_rewards
rewards[done], proxy_values[done] = self.proxy.rewards(
^^^^^^^^^^^^^^^^^^^
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/proxy/base.py", line 140, in rewards
rewards = self.proxy2logreward(proxy_values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/proxy/base.py", line 185, in proxy2logreward
logrewards = self._logreward_function(proxy_values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/proxy/base.py", line 338, in <lambda>
kwargs["alpha"]
~~~~~~^^^^^^^^^
KeyError: 'alpha'
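The KeyError suggests that the configured log-reward function is built as a lambda that indexes `kwargs["alpha"]`, while the Tetris config never sets `alpha`. A minimal sketch of that failure mode and a defensive fallback (the names below are hypothetical illustrations, not the repository's actual code):

```python
# Sketch of the failure: a reward function factory that captures kwargs
# and indexes "alpha" lazily, so the KeyError only surfaces at call time.
def make_logreward_fn(**kwargs):
    # The crashing pattern looks roughly like:
    #     return lambda x: kwargs["alpha"] * x   # KeyError if alpha unset
    # A defensive variant falls back to a default value instead:
    alpha = kwargs.get("alpha", 1.0)
    return lambda x: alpha * x

fn = make_logreward_fn()  # no alpha supplied, as in the Tetris config
print(fn(2.0))  # → 2.0
```

The config-side fix would be to set `alpha` explicitly for the proxy's reward function via a Hydra override; the exact config key depends on the repository's proxy config layout.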
Another error occurs if I change the device from `cpu` to `cuda`:
Traceback (most recent call last):
File "/home/mirekl/Tmp_Repositories/gflownet/main.py", line 102, in main
gflownet.train()
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/gflownet.py", line 1120, in train
self.evaluator.eval_and_log(it)
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/evaluator/abstract.py", line 654, in eval_and_log
results = self.eval(metrics=metrics)
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/evaluator/base.py", line 501, in eval
lp_results = self.compute_log_prob_metrics(x_tt, metrics=metrics)
File "/home/mirekl/Tmp_Repositories/gflownet/gflownet/evaluator/base.py", line 280, in compute_log_prob_metrics
lp_metrics["corr_prob_traj_rewards"] = np.corrcoef(
File "/home/mirekl/envs/gflownet/lib/python3.10/site-packages/numpy/lib/function_base.py", line 2889, in corrcoef
c = cov(x, y, rowvar, dtype=dtype)
File "/home/mirekl/envs/gflownet/lib/python3.10/site-packages/numpy/lib/function_base.py", line 2664, in cov
y = np.asarray(y)
File "/home/mirekl/envs/gflownet/lib/python3.10/site-packages/torch/_tensor.py", line 970, in __array__
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
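This second traceback is a generic PyTorch/NumPy interop issue: `np.corrcoef` calls `np.asarray` on its arguments, which fails for tensors living on the GPU. A minimal sketch of the fix, using hypothetical stand-ins for the evaluator's log-probability and reward tensors:

```python
import numpy as np
import torch

# Hypothetical stand-ins for the tensors the evaluator correlates.
log_probs = torch.randn(8)
rewards = torch.rand(8)

# If either tensor is on a CUDA device, np.corrcoef raises the TypeError
# above. Detaching and copying to host memory first avoids it:
corr = np.corrcoef(
    log_probs.detach().cpu().numpy(),
    rewards.detach().cpu().numpy(),
)[0, 1]
print(corr)
```

The same `.detach().cpu().numpy()` pattern applies wherever a tensor is handed to a NumPy routine in the evaluator.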
I am also a bit confused about the speed difference between CPU and CUDA: is it really impossible (or no faster) to train the neural network on the GPU?
Anyway: the Tetris environment doesn't work for me, while other environments (such as the grid example in the README) do. Do you have a working config for the Tetris environment, or does the default work for you? And do I need to change any parameters depending on the device (cpu vs cuda)?