
ConvTranspose3d

class torch.ao.nn.quantized.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)[source]

Applies a 3D transposed convolution operator over an input image composed of several input planes. For details on input arguments, parameters, and implementation, see ConvTranspose3d.

Note

Currently only the FBGEMM engine is implemented. Please set torch.backends.quantized.engine = 'fbgemm'.

For special notes, please see Conv3d.

Variables

  • weight (Tensor) – packed tensor derived from the learnable weight parameter.

  • scale (Tensor) – scalar for the output scale

  • zero_point (Tensor) – scalar for the output zero point

For other attributes, see ConvTranspose3d.

Examples

>>> torch.backends.quantized.engine = 'fbgemm'
>>> from torch.ao.nn import quantized as nnq
>>> # With cubic kernels and equal stride
>>> m = nnq.ConvTranspose3d(16, 33, 3, stride=2)
>>> # Non-cubic kernels, unequal stride, and padding
>>> m = nnq.ConvTranspose3d(16, 33, (3, 3, 5), stride=(2, 1, 1), padding=(4, 2, 2))
>>> input = torch.randn(20, 16, 50, 100, 100)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # The exact output size can also be specified as an argument
>>> input = torch.randn(1, 16, 12, 12, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv3d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose3d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12, 12])
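
As a complement to the Variables listed above, the following sketch continues the example and shows how the module's output quantization parameters can be inspected and overridden. The specific values assigned here are illustrative assumptions (e.g. taken from a calibration step), not values prescribed by the API:

>>> # Output quantization parameters are ordinary attributes on the module
>>> upsample.scale, upsample.zero_point
(1.0, 0)
>>> # They can be overridden, e.g. with values obtained from calibration
>>> upsample.scale = 0.5
>>> upsample.zero_point = 2
>>> # weight() returns the (unpacked) quantized weight tensor
>>> upsample.weight().dtype
torch.qint8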
