mattertune.configs.finetune.base

class mattertune.configs.finetune.base.FinetuneModuleBaseConfig(*, reset_backbone=False, freeze_backbone=False, reset_output_heads=True, use_pretrained_normalizers=False, output_internal_features=False, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={})[source]
Parameters:
  • reset_backbone (bool)

  • freeze_backbone (bool)

  • reset_output_heads (bool)

  • use_pretrained_normalizers (bool)

  • output_internal_features (bool)

  • properties (Sequence[PropertyConfig])

  • optimizer (OptimizerConfig)

  • lr_scheduler (LRSchedulerConfig | None)

  • ignore_gpu_batch_transform_error (bool)

  • normalizers (Mapping[str, Sequence[NormalizerConfig]])

reset_backbone: bool

Whether to reset the backbone of the model when creating the model.

freeze_backbone: bool

Whether to freeze the backbone during training.

reset_output_heads: bool

Whether to reset the output heads of the model when creating the model.

use_pretrained_normalizers: bool

Whether to use the pretrained normalizers.

output_internal_features: bool

If set to True, the model will output the internal features of the backbone model instead of the predicted properties.

properties: Sequence[PropertyConfig]

Properties to predict.

optimizer: OptimizerConfig

Optimizer.

lr_scheduler: LRSchedulerConfig | None

Learning rate scheduler.

ignore_gpu_batch_transform_error: bool

Whether to ignore errors raised during GPU batch transformation (data processing) during training, rather than aborting the run.

normalizers: Mapping[str, Sequence[NormalizerConfig]]

Normalizers for the properties.

Any property can be associated with multiple normalizers. This is useful for cases where we want to normalize the same property in different ways. For example, we may want to normalize the energy by subtracting the atomic reference energies, as well as by mean and standard deviation normalization.

The normalizers are applied in the order they are defined in the list.
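
A minimal sketch of how an ordered list of normalizers composes, assuming hypothetical normalizer callables (the real NormalizerConfig classes and their interface may differ):

    # Hypothetical normalizer callables, shown only to illustrate ordered application.
    def subtract_atomic_references(energy: float, num_atoms: int) -> float:
        # Subtract a made-up per-atom reference energy.
        return energy - (-3.2) * num_atoms

    def standardize(energy: float, num_atoms: int) -> float:
        # Standardize with made-up dataset statistics (mean 0.0, std 1.5).
        return (energy - 0.0) / 1.5

    # Normalizers attached to a property are applied in the order they are listed.
    energy_normalizers = [subtract_atomic_references, standardize]
    energy, num_atoms = -25.7, 8
    for normalize in energy_normalizers:
        energy = normalize(energy, num_atoms)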

abstract classmethod ensure_dependencies()[source]

Ensure that all dependencies are installed.

This method should raise an exception if any dependencies are missing, with a message indicating which dependencies are missing and how to install them.

abstract create_model()[source]

Creates an instance of the finetune module for this configuration.

Return type:

FinetuneModuleBase
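
A minimal subclass sketch showing how a concrete backbone configuration might implement both abstract methods; the my_backbone package and MyBackboneModule are hypothetical placeholders, not part of mattertune:

    import importlib.util

    from mattertune.configs.finetune.base import FinetuneModuleBaseConfig

    class MyBackboneConfig(FinetuneModuleBaseConfig):
        @classmethod
        def ensure_dependencies(cls):
            # Fail early, with installation instructions, if the backbone package is missing.
            if importlib.util.find_spec("my_backbone") is None:  # hypothetical package
                raise ImportError(
                    "MyBackboneConfig requires `my_backbone`; "
                    "install it with `pip install my_backbone`."
                )

        def create_model(self):
            # Build the FinetuneModuleBase subclass that wraps the backbone.
            from my_backbone_module import MyBackboneModule  # hypothetical module

            return MyBackboneModule(self)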

class mattertune.configs.finetune.base.ReduceOnPlateauConfig(*, type='ReduceLROnPlateau', mode, monitor='val_loss', factor, patience, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)[source]
Parameters:
  • type (Literal['ReduceLROnPlateau'])

  • mode (Literal['min', 'max'])

  • monitor (str)

  • factor (float)

  • patience (int)

  • threshold (float)

  • threshold_mode (Literal['rel', 'abs'])

  • cooldown (int)

  • min_lr (float)

  • eps (float)

type: Literal['ReduceLROnPlateau']

Type of the learning rate scheduler.

mode: Literal['min', 'max']

One of "min", "max". In "min" mode, the learning rate is reduced when the monitored quantity stops decreasing; in "max" mode, when it stops increasing.

monitor: str

Quantity to be monitored.

factor: float

Factor by which the learning rate will be reduced: new_lr = lr * factor.

patience: int

Number of epochs with no improvement after which learning rate will be reduced.

threshold: float

Threshold for measuring the new optimum.

threshold_mode: Literal['rel', 'abs']

One of "rel", "abs". In "rel" mode, the threshold is interpreted relative to the best value seen so far; in "abs" mode, it is an absolute difference.

cooldown: int

Number of epochs to wait before resuming normal operation after the learning rate has been reduced.

min_lr: float

A lower bound on the learning rate.

eps: float

Minimal decay applied to the learning rate. If the difference between the new and old learning rate is smaller than eps, the update is ignored.
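
For example, a scheduler config that halves the learning rate after five epochs without improvement (the monitored metric name is illustrative and depends on what you log); the resulting config is passed as the lr_scheduler field of a finetune module config:

    from mattertune.configs.finetune.base import ReduceOnPlateauConfig

    lr_scheduler = ReduceOnPlateauConfig(
        mode="min",            # reduce when the monitored quantity stops decreasing
        monitor="val_loss",    # default; replace with the metric you actually log
        factor=0.5,            # new_lr = lr * factor
        patience=5,            # epochs without improvement before reducing
        threshold=1e-4,
        threshold_mode="rel",
        cooldown=1,
        min_lr=1e-6,
    )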