mattertune.configs.data.datamodule

class mattertune.configs.data.datamodule.AutoSplitDataModuleConfig(*, batch_size, num_workers='auto', pin_memory=True, dataset, train_split, validation_split='auto', shuffle=True, shuffle_seed=42)[source]
Parameters:
  • batch_size (int)

  • num_workers (int | Literal['auto'])

  • pin_memory (bool)

  • dataset (DatasetConfig)

  • train_split (float)

  • validation_split (float | Literal['auto', 'disable'])

  • shuffle (bool)

  • shuffle_seed (int)

dataset: DatasetConfig

The configuration for the dataset.

train_split: float

The proportion of the dataset to include in the training split.

validation_split: float | Literal['auto', 'disable']

The proportion of the dataset to include in the validation split.

If set to “auto”, the validation split will be automatically determined as the complement of the training split, i.e. validation_split = 1 - train_split.

If set to “disable”, the validation split will be disabled.

shuffle: bool

Whether to shuffle the dataset before splitting.

shuffle_seed: int

The seed to use for shuffling the dataset.

dataset_configs()[source]
create_datasets()[source]
batch_size: int

The batch size for the dataloaders.

num_workers: int | Literal['auto']

The number of workers for the dataloaders.

This is the number of processes that generate batches in parallel.

If set to “auto”, the number of workers will be automatically set based on the number of available CPUs.

Set to 0 to disable parallelism.

pin_memory: bool

Whether to pin memory in the dataloaders.

This is useful for speeding up host-to-GPU data transfer.
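
A minimal usage sketch for an 80/20 automatic split (``dataset_cfg`` is a placeholder for any concrete DatasetConfig instance; the concrete dataset configuration classes are documented elsewhere):

   from mattertune.configs.data.datamodule import AutoSplitDataModuleConfig

   # `dataset_cfg` is a placeholder for a concrete DatasetConfig instance;
   # its construction is assumed and not shown here.
   data = AutoSplitDataModuleConfig(
       dataset=dataset_cfg,
       train_split=0.8,
       validation_split="auto",  # complement of train_split, i.e. 0.2
       batch_size=32,
       num_workers="auto",       # worker count chosen from available CPUs
       pin_memory=True,          # speeds up host-to-GPU transfer
       shuffle=True,
       shuffle_seed=42,
   )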

class mattertune.configs.data.datamodule.DataModuleBaseConfig(*, batch_size, num_workers='auto', pin_memory=True)[source]
Parameters:
  • batch_size (int)

  • num_workers (int | Literal['auto'])

  • pin_memory (bool)

batch_size: int

The batch size for the dataloaders.

num_workers: int | Literal['auto']

The number of workers for the dataloaders.

This is the number of processes that generate batches in parallel.

If set to “auto”, the number of workers will be automatically set based on the number of available CPUs.

Set to 0 to disable parallelism.

pin_memory: bool

Whether to pin memory in the dataloaders.

This is useful for speeding up host-to-GPU data transfer.

dataloader_kwargs()[source]
Return type:

DataLoaderKwargs

abstract dataset_configs()[source]
Return type:

Iterable[DatasetConfig]

abstract create_datasets()[source]
Return type:

DatasetMapping
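
AutoSplitDataModuleConfig and ManualSplitDataModuleConfig implement this contract. The following is a hedged sketch of what a custom subclass would have to provide; field declarations and method bodies are omitted, since the internal structure of DatasetMapping is documented elsewhere:

   from mattertune.configs.data.datamodule import DataModuleBaseConfig


   class MyDataModuleConfig(DataModuleBaseConfig):
       """Hypothetical subclass illustrating the abstract interface."""

       # A real subclass would also declare its own DatasetConfig fields,
       # as AutoSplitDataModuleConfig and ManualSplitDataModuleConfig do.

       def dataset_configs(self):
           # Yield every DatasetConfig owned by this data module
           # (return type: Iterable[DatasetConfig]).
           ...

       def create_datasets(self):
           # Instantiate the datasets and return them as a DatasetMapping.
           ...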

class mattertune.configs.data.datamodule.ManualSplitDataModuleConfig(*, batch_size, num_workers='auto', pin_memory=True, train, validation=None)[source]
Parameters:
  • batch_size (int)

  • num_workers (int | Literal['auto'])

  • pin_memory (bool)

  • train (DatasetConfig)

  • validation (DatasetConfig | None)

train: DatasetConfig

The configuration for the training data.

validation: DatasetConfig | None

The configuration for the validation data.

dataset_configs()[source]
create_datasets()[source]
batch_size: int

The batch size for the dataloaders.

num_workers: int | Literal['auto']

The number of workers for the dataloaders.

This is the number of processes that generate batches in parallel.

If set to “auto”, the number of workers will be automatically set based on the number of available CPUs.

Set to 0 to disable parallelism.

pin_memory: bool

Whether to pin memory in the dataloaders.

This is useful for speeding up host-to-GPU data transfer.
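
A minimal usage sketch with pre-split data (``train_cfg`` and ``val_cfg`` are placeholders for concrete DatasetConfig instances built elsewhere):

   from mattertune.configs.data.datamodule import ManualSplitDataModuleConfig

   data = ManualSplitDataModuleConfig(
       train=train_cfg,
       validation=val_cfg,  # or leave unset (None) to skip validation
       batch_size=16,
       num_workers=0,       # 0 disables worker processes
       pin_memory=True,
   )

   # The shared loader settings (batch_size, num_workers, pin_memory) are
   # available via the inherited helper:
   loader_kwargs = data.dataloader_kwargs()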