matchzoo.dataloader¶
Package Contents¶
Classes¶
Dataset | Dataset that is built from a data pack.
DataLoader | DataLoader that loads batches of data from a Dataset.
DataLoaderBuilder | DataLoader builder; in essence a wrapped partial function.
DatasetBuilder | Dataset builder; in essence a wrapped partial function.
-
class
matchzoo.dataloader.
Dataset
(data_pack: mz.DataPack, mode='point', num_dup: int = 1, num_neg: int = 1, batch_size: int = 32, resample: bool = False, shuffle: bool = True, sort: bool = False, callbacks: typing.List[BaseCallback] = None)¶ Bases:
torch.utils.data.IterableDataset
Dataset that is built from a data pack.
- Parameters
data_pack – DataPack to build the dataset.
mode – One of “point”, “pair”, and “list”. (default: “point”)
num_dup – Number of duplications per instance, only effective when mode is “pair”. (default: 1)
num_neg – Number of negative samples per instance, only effective when mode is “pair”. (default: 1)
batch_size – Batch size. (default: 32)
resample – Whether to resample for each epoch, only effective when mode is “pair”. (default: False)
shuffle – Whether to shuffle the samples/instances. (default: True)
sort – Whether to sort data according to length_right. (default: False)
callbacks – Callbacks. See matchzoo.dataloader.callbacks for more details.
Examples
>>> import matchzoo as mz
>>> data_pack = mz.datasets.toy.load_data(stage='train')
>>> preprocessor = mz.preprocessors.BasicPreprocessor()
>>> data_processed = preprocessor.fit_transform(data_pack)
>>> dataset_point = mz.dataloader.Dataset(
...     data_processed, mode='point', batch_size=32)
>>> len(dataset_point)
4
>>> dataset_pair = mz.dataloader.Dataset(
...     data_processed, mode='pair', num_dup=2, num_neg=2, batch_size=32)
>>> len(dataset_pair)
1
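Note that len() above counts batches, not rows: the point-wise toy pack fills four batches of size 32. Under the usual convention (assumed here, not quoted from the source), the batch count is the ceiling of the instance count over the batch size, so a minimal sketch is:

```python
import math

def num_batches(num_instances: int, batch_size: int) -> int:
    """Number of batches covering all instances; the last batch may be smaller."""
    return math.ceil(num_instances / batch_size)

# A pack that yields 4 point-wise batches at batch_size=32 holds
# between 97 and 128 instances.
print(num_batches(100, 32))  # 4
```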
-
__getitem__
(self, item) → typing.Tuple[dict, np.ndarray]¶ Get the batch at the given index.
- Parameters
item – The index of the batch.
-
__len__
(self) → int¶ Get the total number of batches.
-
__iter__
(self)¶ Create a generator that iterates over the batches.
-
on_epoch_end
(self)¶ Reorganize the index array if needed.
-
resample_data
(self)¶ Reorganize data.
-
reset_index
(self)¶ Set the _batch_indices. Here _batch_indices records the indices of all the instances.
-
_handle_callbacks_on_batch_data_pack
(self, batch_data_pack)¶
-
_handle_callbacks_on_batch_unpacked
(self, x, y)¶
-
property
callbacks
(self)¶ callbacks getter.
-
property
num_neg
(self)¶ num_neg getter.
-
property
num_dup
(self)¶ num_dup getter.
-
property
mode
(self)¶ mode getter.
-
property
batch_size
(self)¶ batch_size getter.
-
property
shuffle
(self)¶ shuffle getter.
-
property
sort
(self)¶ sort getter.
-
property
resample
(self)¶ resample getter.
-
property
batch_indices
(self)¶ batch_indices getter.
-
classmethod
_reorganize_pair_wise
(cls, relation: pd.DataFrame, num_dup: int = 1, num_neg: int = 1)¶ Re-organize the data pack as pair-wise format.
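The pair-wise re-organization can be illustrated without MatchZoo: for each query, each positive document is duplicated num_dup times and paired with num_neg sampled negatives. This is a simplified sketch of the idea using plain tuples instead of a relation DataFrame; the function name and data layout are illustrative, not MatchZoo's internals.

```python
import random

def reorganize_pair_wise(relation, num_dup=1, num_neg=1, seed=0):
    """Sketch: turn (query, doc, label) triples into groups of one
    positive followed by num_neg sampled negatives, with each positive
    duplicated num_dup times."""
    rng = random.Random(seed)
    by_query = {}
    for qid, did, label in relation:
        bucket = by_query.setdefault(qid, {"pos": [], "neg": []})
        bucket["pos" if label > 0 else "neg"].append((qid, did, label))
    groups = []
    for qid, docs in by_query.items():
        if not docs["neg"]:          # a pair needs at least one negative
            continue
        for pos in docs["pos"] * num_dup:
            negs = [rng.choice(docs["neg"]) for _ in range(num_neg)]
            groups.append([pos] + negs)
    return groups

relation = [("q1", "d1", 1), ("q1", "d2", 0), ("q1", "d3", 0),
            ("q2", "d4", 1), ("q2", "d5", 0)]
groups = reorganize_pair_wise(relation, num_dup=2, num_neg=2)
# 2 queries x 1 positive x num_dup=2 -> 4 groups of 1 positive + 2 negatives
```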
-
class
matchzoo.dataloader.
DataLoader
(dataset: Dataset, device: typing.Union[torch.device, int, list, None] = None, stage='train', callback: BaseCallback = None, pin_memory: bool = False, timeout: int = 0, num_workers: int = 0, worker_init_fn=None)¶ Bases:
object
DataLoader that loads batches of data from a Dataset.
- Parameters
dataset – The Dataset object to load data from.
device – The desired device of returned tensor. Default: if None, use the current device. If torch.device or int, use device specified by user. If list, the first item will be used.
stage – One of “train”, “dev”, and “test”. (default: “train”)
callback – BaseCallback. See matchzoo.engine.base_callback.BaseCallback for more details.
pin_memory – If set to True, tensors will be copied into pinned memory. (default: False)
timeout – The timeout value for collecting a batch from workers. (default: 0)
num_workers – The number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)
worker_init_fn – If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)
Examples
>>> import matchzoo as mz
>>> data_pack = mz.datasets.toy.load_data(stage='train')
>>> preprocessor = mz.preprocessors.BasicPreprocessor()
>>> data_processed = preprocessor.fit_transform(data_pack)
>>> dataset = mz.dataloader.Dataset(
...     data_processed, mode='point', batch_size=32)
>>> padding_callback = mz.dataloader.callbacks.BasicPadding()
>>> dataloader = mz.dataloader.DataLoader(
...     dataset, stage='train', callback=padding_callback)
>>> len(dataloader)
4
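The device rule described under the device parameter (None means the current device, a torch.device or int is used as given, a list contributes its first item) can be sketched without torch; the helper name and the "default" stand-in value are hypothetical:

```python
def resolve_device(device=None):
    """Sketch of the documented device-selection rule.
    None -> current/default device; a device or int -> used as given;
    a list -> its first item."""
    if device is None:
        return "default"   # stand-in for the current torch device
    if isinstance(device, list):
        return device[0]
    return device

print(resolve_device([1, 2]))  # the first item of a list is used
```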
-
__len__
(self) → int¶ Get the total number of batches.
-
property
id_left
(self) → np.ndarray¶ id_left getter.
-
property
label
(self) → np.ndarray¶ label getter.
-
__iter__
(self) → typing.Tuple[dict, torch.tensor]¶ Iterate over the batches.
-
_handle_callbacks_on_batch_unpacked
(self, x, y)¶
-
class
matchzoo.dataloader.
DataLoaderBuilder
(**kwargs)¶ Bases:
object
DataLoader builder. In essence a wrapped partial function.
Example
>>> import matchzoo as mz
>>> padding_callback = mz.dataloader.callbacks.BasicPadding()
>>> builder = mz.dataloader.DataLoaderBuilder(
...     stage='train', callback=padding_callback
... )
>>> data_pack = mz.datasets.toy.load_data()
>>> preprocessor = mz.preprocessors.BasicPreprocessor()
>>> data_processed = preprocessor.fit_transform(data_pack)
>>> dataset = mz.dataloader.Dataset(data_processed, mode='point')
>>> dataloader = builder.build(dataset)
>>> type(dataloader)
<class 'matchzoo.dataloader.dataloader.DataLoader'>
-
build
(self, dataset, **kwargs) → DataLoader¶ Build a DataLoader.
- Parameters
dataset – Dataset to build upon.
kwargs – Additional keyword arguments to override the keyword arguments passed in __init__.
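The "wrapped partial function" pattern behind both builders is simple: keyword arguments are stored at construction and merged with (and overridden by) any passed to build(). A minimal self-contained sketch, with illustrative names rather than MatchZoo's internals:

```python
class LoaderBuilder:
    """Store kwargs at construction; let build() override them per call."""
    def __init__(self, target, **kwargs):
        self._target = target
        self._kwargs = kwargs

    def build(self, dataset, **kwargs):
        merged = {**self._kwargs, **kwargs}   # call-time kwargs win
        return self._target(dataset, **merged)

def make_loader(dataset, stage="train", batch_size=32):
    # Stand-in for the real DataLoader constructor.
    return {"dataset": dataset, "stage": stage, "batch_size": batch_size}

builder = LoaderBuilder(make_loader, stage="dev")
loader = builder.build([1, 2, 3], batch_size=8)   # overrides batch_size only
```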
-
-
class
matchzoo.dataloader.
DatasetBuilder
(**kwargs)¶ Bases:
object
Dataset builder. In essence a wrapped partial function.
Example
>>> import matchzoo as mz
>>> builder = mz.dataloader.DatasetBuilder(
...     mode='point'
... )
>>> data = mz.datasets.toy.load_data()
>>> gen = builder.build(data)
>>> type(gen)
<class 'matchzoo.dataloader.dataset.Dataset'>