ConvTranspose1d
- class torch.ao.nn.quantized.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)[source]
Applies a 1D transposed convolution operator over an input image composed of several input planes. For details on input arguments, parameters, and implementation see ConvTranspose1d.

Note
Currently only the QNNPACK engine is implemented. Please set torch.backends.quantized.engine = 'qnnpack'.
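Whether QNNPACK is available depends on how PyTorch was built, so it can be worth checking before switching engines. A minimal sketch (torch.backends.quantized.supported_engines is the real attribute; the printed list varies by platform):

import torch

# Engines compiled into this build; 'qnnpack' must be listed for the
# quantized ConvTranspose1d below to run.
print(torch.backends.quantized.supported_engines)
if 'qnnpack' in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = 'qnnpack'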
For special notes, please see Conv1d.
For other attributes, see ConvTranspose2d.
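Because several input lengths can map to the same downsampled length, it helps to keep the output-length formula from torch.nn.ConvTranspose1d in mind when reading the example below; the helper function here is illustrative, not part of the API:

def conv_transpose1d_out_len(l_in, kernel_size, stride=1, padding=0,
                             output_padding=0, dilation=1):
    # L_out = (L_in - 1) * stride - 2 * padding
    #         + dilation * (kernel_size - 1) + output_padding + 1
    return ((l_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# With kernel_size=3, stride=2, padding=1 a length-6 input yields length 11;
# passing output_size= in the example below lets the module choose
# output_padding=1 internally to reach the requested length 12 instead.
print(conv_transpose1d_out_len(6, kernel_size=3, stride=2, padding=1))  # 11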
Examples:
>>> import torch
>>> torch.backends.quantized.engine = 'qnnpack'
>>> from torch.ao.nn import quantized as nnq
>>> # With a single kernel size and stride
>>> m = nnq.ConvTranspose1d(16, 33, 3, stride=2)
>>> # With a larger kernel, a different stride, and padding
>>> m = nnq.ConvTranspose1d(16, 33, 5, stride=3, padding=4)
>>> input = torch.randn(20, 16, 50)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # exact output size can also be specified as an argument
>>> input = torch.randn(1, 16, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv1d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose1d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12])
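In practice a quantized ConvTranspose1d is usually produced by converting a float model rather than constructed directly. Below is a minimal sketch of the eager-mode static quantization flow, assuming the default qnnpack qconfig (whose per-tensor weight observer is what transposed convolutions require); UpsampleBlock and its sizes are illustrative, while QuantStub, DeQuantStub, get_default_qconfig, prepare, and convert are the standard torch.ao.quantization API:

import torch
from torch import nn
from torch.ao.nn import quantized as nnq
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qconfig, prepare, convert,
)

torch.backends.quantized.engine = 'qnnpack'

class UpsampleBlock(nn.Module):  # illustrative wrapper, not part of torch
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # quantizes the float input
        self.up = nn.ConvTranspose1d(16, 33, 3, stride=2)
        self.dequant = DeQuantStub()  # back to float at the output boundary

    def forward(self, x):
        return self.dequant(self.up(self.quant(x)))

model = UpsampleBlock().eval()
model.qconfig = get_default_qconfig('qnnpack')
prepared = prepare(model)            # insert observers
prepared(torch.randn(2, 16, 50))     # one calibration pass
quantized = convert(prepared)        # swap float modules for quantized ones
assert isinstance(quantized.up, nnq.ConvTranspose1d)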