matchzoo.modules

Submodules

Package Contents
class matchzoo.modules.Attention(input_size: int = 100, mask: int = 0)
    Bases: torch.nn.Module

    Attention module.

    Parameters:
        - input_size – Size of the input.
        - mask – An integer used to mask invalid values. Defaults to 0.

    Examples
        >>> import torch
        >>> attention = Attention(input_size=10)
        >>> x = torch.randn(4, 5, 10)
        >>> x.shape
        torch.Size([4, 5, 10])
        >>> attention(x).shape
        torch.Size([4, 5])
    forward(self, x)
        Perform attention on the input.
class matchzoo.modules.BidirectionalAttention
    Bases: torch.nn.Module

    Computes the soft attention between two sequences.
    forward(self, v1, v1_mask, v2, v2_mask)
        Forward.
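    The forward pass takes two encoded sequences with their padding masks and
    re-represents each sequence in terms of the other. Below is an illustrative
    sketch of ESIM-style soft alignment; the masking convention (non-zero
    entries marking padded positions) and the exact computation are assumptions
    for illustration, not copied from the library:

        import torch
        import torch.nn.functional as F

        def soft_bidirectional_attention(v1, v1_mask, v2, v2_mask):
            # v1: (batch, len1, dim), v2: (batch, len2, dim)
            # masks: (batch, len); assumed non-zero at padded positions
            sim = v1.bmm(v2.transpose(1, 2))  # (batch, len1, len2)
            # For each v1 position, attend over v2 (mask v2's padding).
            attn_1 = F.softmax(
                sim.masked_fill(v2_mask.unsqueeze(1).bool(), -1e7), dim=2)
            # For each v2 position, attend over v1 (mask v1's padding).
            attn_2 = F.softmax(
                sim.masked_fill(v1_mask.unsqueeze(2).bool(), -1e7), dim=1)
            attended_v1 = attn_1.bmm(v2)                  # (batch, len1, dim)
            attended_v2 = attn_2.transpose(1, 2).bmm(v1)  # (batch, len2, dim)
            return attended_v1, attended_v2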
class matchzoo.modules.MatchModule(hidden_size, dropout_rate=0)
    Bases: torch.nn.Module

    Computes the match representation for Match LSTM.

    Parameters:
        - hidden_size – Size of hidden vectors.
        - dropout_rate – Dropout rate of the projection layer. Defaults to 0.

    Examples
        >>> import torch
        >>> attention = MatchModule(hidden_size=10)
        >>> v1 = torch.randn(4, 5, 10)
        >>> v1.shape
        torch.Size([4, 5, 10])
        >>> v2 = torch.randn(4, 5, 10)
        >>> v2_mask = torch.ones(4, 5).to(dtype=torch.uint8)
        >>> attention(v1, v2, v2_mask).shape
        torch.Size([4, 5, 20])
    forward(self, v1, v2, v2_mask)
        Compute attention vectors and projection vectors.
class matchzoo.modules.RNNDropout
    Bases: torch.nn.Dropout

    Dropout for RNNs.
    forward(self, sequences_batch)
        Mask whole hidden vectors for tokens.
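    A hypothetical usage sketch (not part of the original docs): since the
    class inherits from torch.nn.Dropout, it presumably accepts the same
    constructor arguments, and its output keeps the input shape while dropping
    the same hidden units at every timestep:

        >>> import torch
        >>> dropout = RNNDropout(p=0.5)  # constructor assumed inherited from nn.Dropout
        >>> sequences_batch = torch.randn(4, 5, 10)  # (batch, seq_len, hidden)
        >>> dropout(sequences_batch).shape
        torch.Size([4, 5, 10])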
class matchzoo.modules.StackedBRNN(input_size, hidden_size, num_layers, dropout_rate=0, dropout_output=False, rnn_type=nn.LSTM, concat_layers=False)
    Bases: torch.nn.Module

    Stacked bi-directional RNNs.

    Differs from the standard PyTorch library in that it has the option to save and concatenate the hidden states between layers (i.e. the output hidden size for each sequence input is num_layers * hidden_size).

    Examples
        >>> import torch
        >>> rnn = StackedBRNN(
        ...     input_size=10,
        ...     hidden_size=10,
        ...     num_layers=2,
        ...     dropout_rate=0.2,
        ...     dropout_output=True,
        ...     concat_layers=False
        ... )
        >>> x = torch.randn(2, 5, 10)
        >>> x.size()
        torch.Size([2, 5, 10])
        >>> x_mask = (torch.ones(2, 5) == 1)
        >>> rnn(x, x_mask).shape
        torch.Size([2, 5, 20])
    forward(self, x, x_mask)
        Encode either padded or non-padded sequences.
    _forward_unpadded(self, x, x_mask)
        Faster encoding that ignores any padding.
class matchzoo.modules.GaussianKernel(mu: float = 1.0, sigma: float = 1.0)
    Bases: torch.nn.Module

    Gaussian kernel module.

    Parameters:
        - mu – Float, mean of the kernel.
        - sigma – Float, standard deviation (sigma) of the kernel.

    Examples
        >>> import torch
        >>> kernel = GaussianKernel()
        >>> x = torch.randn(4, 5, 10)
        >>> x.shape
        torch.Size([4, 5, 10])
        >>> kernel(x).shape
        torch.Size([4, 5, 10])
    forward(self, x)
        Forward.
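    For reference, a Gaussian kernel of this kind (as used in KNRM-style
    models) is typically evaluated elementwise as exp(-(x - mu)^2 / (2 * sigma^2)),
    which preserves the input shape. A minimal sketch under that assumption:

        >>> import torch
        >>> mu, sigma = 1.0, 1.0
        >>> x = torch.randn(4, 5, 10)
        >>> k = torch.exp(-0.5 * (x - mu) ** 2 / sigma ** 2)  # assumed formula
        >>> k.shape
        torch.Size([4, 5, 10])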
class matchzoo.modules.Matching(normalize: bool = False, matching_type: str = 'dot')
    Bases: torch.nn.Module

    Module that computes a matching matrix between samples in two tensors.

    Parameters:
        - normalize – Whether to L2-normalize samples along the dot-product axis before taking the dot product. If set to True, the output of the dot product is the cosine proximity between the two samples.
        - matching_type – The similarity function used for matching.

    Examples
        >>> import torch
        >>> matching = Matching(matching_type='dot', normalize=True)
        >>> x = torch.randn(2, 3, 2)
        >>> y = torch.randn(2, 4, 2)
        >>> matching(x, y).shape
        torch.Size([2, 3, 4])
    classmethod _validate_matching_type(cls, matching_type: str = 'dot')
    forward(self, x, y)
        Compute the matching matrix between x and y.
class matchzoo.modules.BertModule(mode: str = 'bert-base-uncased')
    Bases: torch.nn.Module

    BERT module.

    BERT (from Google) was released with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.

    Parameters:
        - mode – String; the supported modes are listed at https://huggingface.co/pytorch-transformers/pretrained_models.html.
    forward(self, x, y)
        Forward.
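    A hypothetical usage sketch (the input convention is an assumption, not
    stated in the original docs): constructing the module presumably loads the
    pretrained weights named by mode, and x and y are batches of token ids for
    the two texts to be matched, produced by the matching BERT tokenizer:

        >>> import torch
        >>> bert = BertModule(mode='bert-base-uncased')  # presumably downloads pretrained weights
        >>> x = torch.randint(1, 1000, (2, 7))  # assumed: token ids of left texts
        >>> y = torch.randint(1, 1000, (2, 9))  # assumed: token ids of right texts
        >>> outputs = bert(x, y)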