mattertune.data.datamodule
Classes

- DatasetMapping
- DataModuleBaseConfig
- ManualSplitDataModuleConfig
- AutoSplitDataModuleConfig
- MatterTuneDataModule
- class mattertune.data.datamodule.DatasetMapping[source]
- train: Dataset[Atoms]
- validation: Dataset[Atoms]
- class mattertune.data.datamodule.DataModuleBaseConfig(*, batch_size, num_workers='auto', pin_memory=True)[source]
- Parameters:
batch_size (int)
num_workers (int | Literal['auto'])
pin_memory (bool)
- batch_size: int
The batch size for the dataloaders.
- num_workers: int | Literal['auto']
The number of workers for the dataloaders.
This is the number of processes that generate batches in parallel.
If set to “auto”, the number of workers will be automatically set based on the number of available CPUs.
Set to 0 to disable parallelism.
- pin_memory: bool
Whether to pin memory in the dataloaders.
This is useful for speeding up GPU data transfer.
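The “auto” setting for num_workers can be pictured with a small sketch. The resolve_num_workers helper below is hypothetical, not part of the library, and the actual resolution logic inside MatterTune may differ:

    import os

    def resolve_num_workers(num_workers):
        # Hypothetical illustration of the documented "auto" behavior:
        # derive the worker count from the number of available CPUs.
        if num_workers == "auto":
            return os.cpu_count() or 0  # 0 disables parallel batch generation
        return num_workers

    resolve_num_workers("auto")  # e.g. 8 on an 8-CPU machine
    resolve_num_workers(0)       # batches are generated in the main process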
- class mattertune.data.datamodule.ManualSplitDataModuleConfig(*, batch_size, num_workers='auto', pin_memory=True, train, validation=None)[source]
- Parameters:
batch_size (int)
num_workers (int | Literal['auto'])
pin_memory (bool)
train (DatasetConfig)
validation (DatasetConfig | None)
- train: DatasetConfig
The configuration for the training data.
- validation: DatasetConfig | None
The configuration for the validation data.
- batch_size: int
The batch size for the dataloaders.
- num_workers: int | Literal['auto']
The number of workers for the dataloaders.
This is the number of processes that generate batches in parallel.
If set to “auto”, the number of workers will be automatically set based on the number of available CPUs.
Set to 0 to disable parallelism.
- pin_memory: bool
Whether to pin memory in the dataloaders.
This is useful for speeding up GPU data transfer.
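A minimal sketch of a manual-split configuration. The XYZDatasetConfig and its src field used here are assumptions for illustration only; substitute whichever DatasetConfig variant matches your data:

    from mattertune.data.datamodule import ManualSplitDataModuleConfig
    from mattertune.data import XYZDatasetConfig  # assumed import; adjust to your DatasetConfig

    data = ManualSplitDataModuleConfig(
        batch_size=32,
        num_workers="auto",      # resolved from the available CPU count
        pin_memory=True,         # speeds up host-to-GPU transfer
        train=XYZDatasetConfig(src="train.xyz"),
        validation=XYZDatasetConfig(src="val.xyz"),  # or None to skip validation
    )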
- class mattertune.data.datamodule.AutoSplitDataModuleConfig(*, batch_size, num_workers='auto', pin_memory=True, dataset, train_split, validation_split='auto', shuffle=True, shuffle_seed=42)[source]
- Parameters:
batch_size (int)
num_workers (int | Literal['auto'])
pin_memory (bool)
dataset (DatasetConfig)
train_split (float)
validation_split (float | Literal['auto', 'disable'])
shuffle (bool)
shuffle_seed (int)
- dataset: DatasetConfig
The configuration for the dataset.
- train_split: float
The proportion of the dataset to include in the training split.
- validation_split: float | Literal['auto', 'disable']
The proportion of the dataset to include in the validation split.
If set to “auto”, the validation split will be automatically determined as the complement of the training split, i.e. validation_split = 1 - train_split.
If set to “disable”, the validation split will be disabled.
- shuffle: bool
Whether to shuffle the dataset before splitting.
- shuffle_seed: int
The seed to use for shuffling the dataset.
- batch_size: int
The batch size for the dataloaders.
- num_workers: int | Literal['auto']
The number of workers for the dataloaders.
This is the number of processes that generate batches in parallel.
If set to “auto”, the number of workers will be automatically set based on the number of available CPUs.
Set to 0 to disable parallelism.
- pin_memory: bool
Whether to pin memory in the dataloaders.
This is useful for speeding up GPU data transfer.
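An auto-split configuration, by contrast, takes a single dataset and derives the splits itself. As above, XYZDatasetConfig is only a stand-in for whichever DatasetConfig your data requires:

    from mattertune.data.datamodule import AutoSplitDataModuleConfig
    from mattertune.data import XYZDatasetConfig  # assumed import; adjust to your DatasetConfig

    data = AutoSplitDataModuleConfig(
        batch_size=32,
        dataset=XYZDatasetConfig(src="all_frames.xyz"),
        train_split=0.9,
        validation_split="auto",  # complement of train_split, i.e. 0.1
        shuffle=True,
        shuffle_seed=42,
    )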
- class mattertune.data.datamodule.MatterTuneDataModule(hparams)[source]
- Parameters:
hparams (DataModuleConfig)
- __init__(hparams)[source]
- Parameters:
hparams (DataModuleConfig | Mapping[str, Any])
- prepare_data_per_node
If True, each LOCAL_RANK=0 will call prepare data. Otherwise only NODE_RANK=0, LOCAL_RANK=0 will prepare data.
- allow_zero_length_dataloader_with_multiple_devices
If True, dataloader with zero length within local rank is allowed. Default value is False.
- prepare_data()[source]
Use this to download and prepare data. Downloading and saving data with multiple processes (distributed settings) will result in corrupted data. Lightning ensures this method is called only within a single process, so you can safely add your downloading logic within.
Warning
DO NOT set state to the model (use setup instead) since this is NOT called on every device.
Example:

    def prepare_data(self):
        # good
        download_data()
        tokenize()
        etc()

        # bad
        self.split = data_split
        self.some_state = some_other_state()
In a distributed environment, prepare_data can be called in two ways (using prepare_data_per_node):

- Once per node. This is the default and is only called on LOCAL_RANK=0.
- Once in total. Only called on GLOBAL_RANK=0.
Example:
    # DEFAULT
    # called once per node on LOCAL_RANK=0 of that node
    class LitDataModule(LightningDataModule):
        def __init__(self):
            super().__init__()
            self.prepare_data_per_node = True


    # call on GLOBAL_RANK=0 (great for shared file systems)
    class LitDataModule(LightningDataModule):
        def __init__(self):
            super().__init__()
            self.prepare_data_per_node = False
This is called before requesting the dataloaders:
    model.prepare_data()
    initialize_distributed()
    model.setup(stage)
    model.train_dataloader()
    model.val_dataloader()
    model.test_dataloader()
    model.predict_dataloader()
- Return type:
None
- setup(stage)[source]
Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.
- Parameters:
stage (str) – either 'fit', 'validate', 'test', or 'predict'
Example:
    class LitModel(...):
        def __init__(self):
            self.l1 = None

        def prepare_data(self):
            download_data()
            tokenize()

            # don't do this
            self.something = else

        def setup(self, stage):
            data = load_data(...)
            self.l1 = nn.Linear(28, data.num_classes)
- property lightning_module
- train_dataloader()[source]
An iterable or collection of iterables specifying training samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
For data processing use the following pattern:

- download in prepare_data()
- process and split in setup()

However, the above are only necessary for distributed processing.
Warning
Do not assign state in prepare_data.
This dataloader is requested during fit().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- val_dataloader()[source]
An iterable or collection of iterables specifying validation samples.
For more information about multiple dataloaders, see the Lightning documentation.
The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.
It’s recommended that all data downloads and preparation happen in prepare_data().
This dataloader is requested during fit() and validate().
Note
Lightning tries to add the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
Note
If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.
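Putting it together, MatterTuneDataModule is a regular LightningDataModule and can be driven by a Lightning Trainer. This is a sketch assuming a DataModuleConfig like the ones above and a LightningModule defined elsewhere; the Trainer usage is standard Lightning, not a MatterTune-specific API:

    import lightning.pytorch as pl
    from mattertune.data.datamodule import MatterTuneDataModule

    datamodule = MatterTuneDataModule(data)  # "data" is a DataModuleConfig from the examples above

    # The Trainer calls prepare_data()/setup() and requests the dataloaders itself.
    trainer = pl.Trainer(max_epochs=10)
    # trainer.fit(model, datamodule=datamodule)  # "model" is your LightningModule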