Hi, I am reading the code, and I am wondering what the function fill_up_weights does.
The code is here.
It seems that fill_up_weights is used to initialize the parameters of ConvTranspose2d, but those parameters are then never updated during training. Why freeze the weights of ConvTranspose2d?
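For reference, the initializer usually looks something like this (my sketch of the common DLA-style pattern, so treat the exact names and shapes as assumptions; the actual version is behind the link above). It fills the ConvTranspose2d kernel with a fixed 2D bilinear interpolation filter:

```python
import math
import torch.nn as nn

def fill_up_weights(up):
    # Fill a ConvTranspose2d's kernel with a fixed 2D bilinear
    # interpolation filter, so the layer performs bilinear upsampling.
    w = up.weight.data
    f = math.ceil(w.size(2) / 2)          # half the kernel size
    c = (2 * f - 1 - f % 2) / (2.0 * f)   # center of the bilinear kernel
    for i in range(w.size(2)):
        for j in range(w.size(3)):
            w[0, 0, i, j] = (1 - math.fabs(i / f - c)) * \
                            (1 - math.fabs(j / f - c))
    # Copy the same filter to every channel; with groups == channels,
    # each channel is upsampled independently.
    for ch in range(1, w.size(0)):
        w[ch, 0, :, :] = w[0, 0, :, :]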
The code is for bilinear upsampling. In early versions of PyTorch, upsampling aligned the corners of the input and output, which actually causes pixel misalignment. Therefore, I made my own version of bilinear upsampling using fill_up_weights and ConvTranspose2d. This has since been resolved in PyTorch; the PyTorch documentation also has some explanation: https://pytorch.org/docs/stable/nn.html?highlight=bilinear#torch.nn.Upsample.
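To make the misalignment concrete, here is a small illustration (my own sketch, not from the repo) of the two conventions PyTorch now exposes via align_corners:

```python
import torch
import torch.nn.functional as F

x = torch.arange(4, dtype=torch.float32).reshape(1, 1, 2, 2)

# Older behavior: corner pixels of input and output are aligned,
# which shifts everything else relative to pixel centers.
a = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)

# Current default: sampling is done at pixel centers, which matches
# the convention the hand-built bilinear ConvTranspose2d reproduces
# away from the borders.
b = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)

print(a[0, 0])
print(b[0, 0])  # different values: the two conventions disagree
```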
So the weight named up.weight is effectively useless as a learned parameter? At inference time, could the ConvTranspose2d operation be replaced with a resize operation using a bilinear kernel?
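That replacement should hold away from the image borders. A quick check one could run (my sketch, assuming the DLA-style configuration with kernel_size=2*factor, stride=factor, padding=factor//2, groups=channels):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

channels, factor = 3, 2
k = factor * 2
up = nn.ConvTranspose2d(channels, channels, kernel_size=k, stride=factor,
                        padding=factor // 2, groups=channels, bias=False)

# Fill with the same bilinear kernel that fill_up_weights produces.
f = (k + 1) // 2
c = (2 * f - 1 - f % 2) / (2.0 * f)
coords = torch.arange(k, dtype=torch.float32)
kern1d = 1 - (coords / f - c).abs()
up.weight.data.copy_((kern1d[:, None] * kern1d[None, :])
                     .expand(channels, 1, k, k))

x = torch.rand(1, channels, 8, 8)
with torch.no_grad():
    a = up(x)
    b = F.interpolate(x, scale_factor=factor, mode='bilinear',
                      align_corners=False)

print((a - b)[..., 1:-1, 1:-1].abs().max())  # ~0 away from the borders
print((a - b).abs().max())  # borders differ: the transposed conv
                            # loses kernel mass at the edges
```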