Thank you for your great work. I have some questions about your model. The Transformer architecture is used twice in your model, for two different tasks: corner prediction and edge prediction. Does the Transformer used in the model include both an encoder and a decoder? How many encoder and decoder layers are there? Additionally, how long does it take to train the model on the Building3D dataset?
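For context on what the question is asking: in PyTorch, an encoder-decoder Transformer and its layer counts can be configured via `nn.Transformer`. The sketch below is purely illustrative; the `d_model`, head count, and layer counts are assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

# Illustrative encoder-decoder Transformer. All hyperparameters here
# (d_model, nhead, layer counts) are assumed values for demonstration,
# not the configuration used in the paper.
model = nn.Transformer(
    d_model=256,
    nhead=8,
    num_encoder_layers=6,   # question: how many does the model use?
    num_decoder_layers=6,
    batch_first=True,
)

src = torch.randn(2, 100, 256)  # e.g. per-point features from the point cloud
tgt = torch.randn(2, 50, 256)   # e.g. corner/edge query embeddings
out = model(src, tgt)
print(out.shape)  # torch.Size([2, 50, 256]) — one output per query
```

The output always has the target (query) sequence length, which is why the encoder/decoder split matters for how corners or edges are queried.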
Besides, I think edge prediction might be a highly imbalanced task. Is weighted cross-entropy used, or a specifically designed loss function? Thank you again for your work!
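To illustrate the weighted cross-entropy option mentioned above: in PyTorch this is the `weight` argument of `nn.CrossEntropyLoss`, which up-weights the rare positive (edge) class. The weight values below are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical class weights: real edges are rare among candidate vertex
# pairs, so the positive class gets a larger weight. The 1:10 ratio is
# illustrative only — it would normally be derived from class frequencies.
class_weights = torch.tensor([1.0, 10.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 2)           # 8 candidate edges, 2 classes
labels = torch.randint(0, 2, (8,))   # 0 = no edge, 1 = edge
loss = criterion(logits, labels)
```

An alternative for heavy imbalance is a focal-style loss, which additionally down-weights easy negatives rather than relying on fixed class weights.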