matchzoo.modules.attention

Attention module.

Module Contents
class matchzoo.modules.attention.Attention(input_size: int = 100, mask: int = 0)

Bases: torch.nn.Module

Attention module.

Parameters:
- input_size – Size of input.
- mask – An integer to mask the invalid values. Defaults to 0.
Examples

>>> import torch
>>> attention = Attention(input_size=10)
>>> x = torch.randn(4, 5, 10)
>>> x.shape
torch.Size([4, 5, 10])
>>> attention(x).shape
torch.Size([4, 5])
forward(self, x)

Perform attention on the input.
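The forward pass is not expanded in this reference. As orientation, here is a minimal sketch of an attention module with this interface, assuming a single linear layer scores each timestep and a softmax over the sequence dimension produces the (batch, seq_len) weights shown in the example above. The class name is hypothetical and the masking step is elided; this is not necessarily the exact MatchZoo implementation.

import torch
import torch.nn.functional as F
from torch import nn

class LinearAttentionSketch(nn.Module):
    """Hypothetical sketch of Attention: score each timestep, then softmax."""

    def __init__(self, input_size: int = 100, mask: int = 0):
        super().__init__()
        # One scalar score per timestep.
        self.linear = nn.Linear(input_size, 1, bias=False)
        self.mask = mask  # value marking invalid positions (handling elided)

    def forward(self, x):
        # x: (batch, seq_len, input_size) -> weights: (batch, seq_len).
        # NOTE: the real module also excludes positions marked by ``mask``
        # before the softmax; that step is elided in this sketch.
        score = self.linear(x).squeeze(dim=-1)
        return F.softmax(score, dim=-1)

With x of shape (4, 5, 10), the sketch returns weights of shape (4, 5), matching the doctest above.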
class matchzoo.modules.attention.BidirectionalAttention

Bases: torch.nn.Module

Computing the soft attention between two sequences.
forward(self, v1, v1_mask, v2, v2_mask)

Compute the soft attention between the two sequences in both directions.
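No doctest accompanies this class. As a reading aid, the following is a minimal sketch of soft bidirectional attention under common assumptions: a similarity matrix v1 @ v2^T is masked and softmax-normalized along each direction, and each sequence is rewritten as a weighted sum of the other. The class name, the masking convention (mask value 0 marks padding), and the pair of returned tensors are assumptions, not the library's confirmed implementation.

import torch
import torch.nn.functional as F
from torch import nn

class BidirectionalAttentionSketch(nn.Module):
    """Hypothetical sketch: soft attention in both directions."""

    def forward(self, v1, v1_mask, v2, v2_mask):
        # v1: (batch, len1, dim), v2: (batch, len2, dim)
        # v1_mask: (batch, len1), v2_mask: (batch, len2); assumed 0 = padding.
        similarity = v1.bmm(v2.transpose(1, 2))  # (batch, len1, len2)

        # Attend over v2 for every v1 position (mask padded v2 columns) ...
        attn_1to2 = F.softmax(
            similarity.masked_fill((v2_mask == 0).unsqueeze(1), -1e9), dim=2)
        # ... and over v1 for every v2 position (mask padded v1 rows).
        attn_2to1 = F.softmax(
            similarity.masked_fill((v1_mask == 0).unsqueeze(2), -1e9), dim=1)

        attended_v1 = attn_1to2.bmm(v2)                  # (batch, len1, dim)
        attended_v2 = attn_2to1.transpose(1, 2).bmm(v1)  # (batch, len2, dim)
        return attended_v1, attended_v2

With v1 of shape (4, 5, 10) and v2 of shape (4, 7, 10), the sketch returns attended tensors of shapes (4, 5, 10) and (4, 7, 10).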
class
matchzoo.modules.attention.
MatchModule
(hidden_size, dropout_rate=0)¶ Bases:
torch.nn.Module
Computing the match representation for Match LSTM.
Parameters: - hidden_size – Size of hidden vectors.
- dropout_rate – Dropout rate of the projection layer. Defaults to 0.
Examples

>>> import torch
>>> attention = MatchModule(hidden_size=10)
>>> v1 = torch.randn(4, 5, 10)
>>> v1.shape
torch.Size([4, 5, 10])
>>> v2 = torch.randn(4, 5, 10)
>>> v2_mask = torch.ones(4, 5).to(dtype=torch.uint8)
>>> attention(v1, v2, v2_mask).shape
torch.Size([4, 5, 20])
forward(self, v1, v2, v2_mask)

Compute attention vectors and projection vectors.
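The doctest above fixes the shapes: with hidden_size=10, inputs of shape (4, 5, 10) produce a match representation of shape (4, 5, 20), i.e. 2 * hidden_size. Below is a minimal sketch that reproduces those shapes under assumed internals (a projection of v2, masked soft attention over v2, a [v1; attended; difference; product] fusion, and a ReLU projection with dropout); the class and layer names are hypothetical, not the library's confirmed code.

import torch
import torch.nn.functional as F
from torch import nn

class MatchModuleSketch(nn.Module):
    """Hypothetical sketch of a Match-LSTM-style match representation."""

    def __init__(self, hidden_size, dropout_rate=0):
        super().__init__()
        self.v2_proj = nn.Linear(hidden_size, hidden_size)
        self.proj = nn.Linear(hidden_size * 4, hidden_size * 2)
        self.dropout = nn.Dropout(p=dropout_rate)

    def forward(self, v1, v2, v2_mask):
        # v1, v2: (batch, len, hidden); v2_mask: (batch, len), 1 = valid.
        similarity = v1.bmm(self.v2_proj(v2).transpose(1, 2))
        # Mask padded v2 positions, then attend over v2 for each v1 step.
        attn = F.softmax(
            similarity.masked_fill((v2_mask == 0).unsqueeze(1), -1e9), dim=2)
        attended = attn.bmm(v2)  # (batch, len, hidden)
        # Fuse v1 with what it attended to, then project to 2 * hidden.
        fusion = torch.cat([v1, attended, v1 - attended, v1 * attended], dim=2)
        return self.dropout(F.relu(self.proj(fusion)))

Running the doctest inputs through this sketch yields the documented (4, 5, 20) output, since the final projection maps the 4 * hidden_size fusion down to 2 * hidden_size.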