How can this CNN be defined in Burn? #2800
AutumnRoom asked this question in Q&A
```rust
use burn::nn::conv::Conv1d;
use burn::nn::{Dropout, LayerNorm};
use burn::prelude::*; // Module, Backend, Tensor

#[derive(Module, Debug, Clone)]
pub enum Activation {
    Relu,
    // etc.
}

#[derive(Module, Debug)]
pub struct ConvBlock<B: Backend> {
    pub conv: Conv1d<B>,
    pub norm: LayerNorm<B>,
    pub activation: Activation,
    pub dropout: Dropout,
}

impl<B: Backend> ConvBlock<B> {
    pub fn forward(&self, x: Tensor<B, 3>) -> Tensor<B, 3> {
        let x = self.norm.forward(self.conv.forward(x));
        let x = match self.activation {
            Activation::Relu => burn::tensor::activation::relu(x),
        };
        self.dropout.forward(x)
    }
}
```

so your cnn would simply be a `Vec<ConvBlock<B>>`.
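A framework-free sketch of the same pattern, using plain std Rust rather than Burn: a `Vec` of blocks whose input is folded through each block in order, which is what a `Vec<ConvBlock<B>>` gives you in Burn (where, to my knowledge, `Vec<M: Module>` itself implements `Module`). The `Block` and `Cnn` types here are hypothetical stand-ins, with `f32` playing the role of a tensor:

```rust
// Hypothetical stand-in for ConvBlock: each block scales then shifts its input.
struct Block {
    scale: f32,
    shift: f32,
}

impl Block {
    fn forward(&self, x: f32) -> f32 {
        x * self.scale + self.shift
    }
}

// Stand-in for the full CNN: a list of blocks, like Vec<ConvBlock<B>> in Burn.
struct Cnn {
    blocks: Vec<Block>,
}

impl Cnn {
    fn forward(&self, x: f32) -> f32 {
        // Fold the input through every block in order, like nn.Sequential.
        self.blocks.iter().fold(x, |acc, b| b.forward(acc))
    }
}

fn main() {
    let cnn = Cnn {
        blocks: vec![
            Block { scale: 2.0, shift: 1.0 },
            Block { scale: 3.0, shift: 0.0 },
        ],
    };
    // (1.0 * 2 + 1) * 3 = 9.0
    println!("{}", cnn.forward(1.0)); // prints 9
}
```

The fold in `forward` is the whole trick: sequencing comes from iteration order, so no separate `Sequential` container type is needed.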
PyTorch:

```python
self.cnn = nn.ModuleList()
for _ in range(depth):
    self.cnn.append(nn.Sequential(
        weight_norm(nn.Conv1d(channels, channels, kernel_size=kernel_size, padding=padding)),
        LayerNorm(channels),
        actv,
        nn.Dropout(0.2),
    ))
```
Here are some points of confusion:
1. `weight_norm()`
2. `Sequential()` and `ModuleList`
3. `for _ in range()` (maybe)
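On point 1: as far as I know, Burn has no built-in `weight_norm` wrapper, but the underlying reparameterization is simple math: keep a direction vector `v` and a gain `g`, and use `w = g * v / ||v||` as the effective weight. A framework-free sketch on a flat `f32` slice (in Burn you would apply the same math to the `Conv1d` weight tensor):

```rust
// Weight normalization reparameterization: w = g * v / ||v||.

fn l2_norm(v: &[f32]) -> f32 {
    v.iter().map(|x| x * x).sum::<f32>().sqrt()
}

fn weight_norm(v: &[f32], g: f32) -> Vec<f32> {
    let n = l2_norm(v);
    // Scale the direction v to have norm exactly g.
    v.iter().map(|x| g * x / n).collect()
}

fn main() {
    let v = vec![3.0, 4.0]; // ||v|| = 5
    let w = weight_norm(&v, 10.0);
    // Effective weight keeps v's direction but has norm g = 10.
    println!("{:?}", w); // prints [6.0, 8.0]
}
```

On points 2 and 3: `nn.ModuleList` maps to a plain `Vec` of modules in Burn, and the `for _ in range(depth)` construction loop becomes an iterator, e.g. `(0..depth).map(|_| /* init a block */).collect::<Vec<_>>()`.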