You can add a new axis by using None as a selector on your tensor/array.
This way you can "prepend" an extra dimension to 2-D data in both numpy and torch:
# datapoints have shape [4, 768]
datapoints = datapoints[None, :, :]
# datapoints now have shape [1, 4, 768]
By swapping None with :, you could also reshape to [4, 1, 768] or [4, 768, 1].
If you need to swap the 4 and 768 dimensions for whatever reason, have a look at .transpose().
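For example, here is a minimal, self-contained sketch of the different placements (the datapoints name is just a stand-in for any 2-D tensor of shape [4, 768]):

import torch

datapoints = torch.randn(4, 768)       # 2-D input, shape [4, 768]

a = datapoints[None, :, :]             # shape [1, 4, 768]
b = datapoints[:, None, :]             # shape [4, 1, 768]
c = datapoints[:, :, None]             # shape [4, 768, 1]

d = datapoints.unsqueeze(0)            # torch-native equivalent of a: shape [1, 4, 768]
e = datapoints.transpose(0, 1)         # swaps two existing dims instead of adding one: shape [768, 4]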
I get the following error with the Bayesian GRU implementation.
ValueError Traceback (most recent call last)
<ipython-input-...> in <module>
9 labels=labels.to(device),
10 criterion=criterion,
---> 11 sample_nbr=3)
12 loss.backward()
13 optimizer.step()
/opt/conda/lib/python3.7/site-packages/blitz/utils/variational_estimator.py in sample_elbo(self, inputs, labels, criterion, sample_nbr, complexity_cost_weight)
63 loss = 0
64 for _ in range(sample_nbr):
---> 65 outputs = self(inputs)
66 loss += criterion(outputs, labels)
67 loss += self.nn_kl_divergence() * complexity_cost_weight
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-...> in forward(self, encodings, hidden_states, sharpen_loss)
17
18 #pass the inputs to the model
---> 19 x,t = self.bGRU1(encodings,hidden_states = None, sharpen_loss = None)
20 x,t = self.bGRU2(x,t)
21 return x
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/blitz/modules/gru_bayesian_layer.py in forward(self, x, hidden_states, sharpen_loss)
221 sharpen_loss = None
222
--> 223 return self.forward_(x, hidden_states, sharpen_loss)
224
/opt/conda/lib/python3.7/site-packages/blitz/modules/gru_bayesian_layer.py in forward_(self, x, hidden_states, sharpen_loss)
135
136 #Assumes x is of shape (batch, sequence, feature)
--> 137 bs, seq_sz, _ = x.size()
138 hidden_seq = []
139
ValueError: not enough values to unpack (expected 3, got 2)
For these lines of code in my training loop:
for i, (datapoints, labels) in enumerate(train_dataloader):
optimizer.zero_grad()
My datapoints have shape torch.Size([4, 768]). How am I expected to reshape them? Please advise. Thanks.
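For reference, a minimal sketch of how the reshape could be applied inside that loop, given that the layer unpacks bs, seq_sz, _ = x.size() and therefore expects a 3-D (batch, sequence, feature) input. The names model, device and train_dataloader mirror the snippets above; treating each 768-dim row as a length-1 sequence is an assumption, and [None, :, :] (giving [1, 4, 768]) may be the right choice instead if the 4 rows form a single sequence:

for i, (datapoints, labels) in enumerate(train_dataloader):
    optimizer.zero_grad()

    # The Bayesian GRU layer needs (batch, sequence, feature), so add a
    # sequence dimension of length 1: [4, 768] -> [4, 1, 768].
    # Use datapoints[None, :, :] instead if the 4 rows form one sequence.
    datapoints = datapoints[:, None, :]

    loss = model.sample_elbo(inputs=datapoints.to(device),
                             labels=labels.to(device),
                             criterion=criterion,
                             sample_nbr=3)
    loss.backward()
    optimizer.step()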