mattertune.configs.finetune

class mattertune.configs.finetune.AdamConfig(*, name='Adam', lr, eps=1e-08, betas=(0.9, 0.999), weight_decay=0.0, amsgrad=False)[source]
Parameters:
  • name (Literal['Adam'])

  • lr (Annotated[float, Gt(gt=0)])

  • eps (Annotated[float, Ge(ge=0)])

  • betas (tuple[Annotated[float, Gt(gt=0)], Annotated[float, Gt(gt=0)]])

  • weight_decay (Annotated[float, Ge(ge=0)])

  • amsgrad (bool)

name: Literal['Adam']

Name of the optimizer.

lr: C.PositiveFloat

Learning rate.

eps: C.NonNegativeFloat

Epsilon term added to the denominator for numerical stability.

betas: tuple[C.PositiveFloat, C.PositiveFloat]

Coefficients used for computing running averages of the gradient and its square.

weight_decay: C.NonNegativeFloat

Weight decay.

amsgrad: bool

Whether to use AMSGrad variant of Adam.
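
A minimal construction sketch (import path assumed from the class name above):

    from mattertune.configs.finetune import AdamConfig

    # Adam with a 1e-4 learning rate; the remaining fields keep their
    # documented defaults (eps=1e-8, betas=(0.9, 0.999), weight_decay=0.0,
    # amsgrad=False).
    optimizer = AdamConfig(lr=1e-4)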

class mattertune.configs.finetune.AdamWConfig(*, name='AdamW', lr, eps=1e-08, betas=(0.9, 0.999), weight_decay=0.01, amsgrad=False)[source]
Parameters:
  • name (Literal['AdamW'])

  • lr (Annotated[float, Gt(gt=0)])

  • eps (Annotated[float, Ge(ge=0)])

  • betas (tuple[Annotated[float, Gt(gt=0)], Annotated[float, Gt(gt=0)]])

  • weight_decay (Annotated[float, Ge(ge=0)])

  • amsgrad (bool)

name: Literal['AdamW']

Name of the optimizer.

lr: C.PositiveFloat

Learning rate.

eps: C.NonNegativeFloat

Epsilon term added to the denominator for numerical stability.

betas: tuple[C.PositiveFloat, C.PositiveFloat]

Coefficients used for computing running averages of the gradient and its square.

weight_decay: C.NonNegativeFloat

Weight decay.

amsgrad: bool

Whether to use AMSGrad variant of Adam.

class mattertune.configs.finetune.CosineAnnealingLRConfig(*, type='CosineAnnealingLR', T_max, eta_min=0, last_epoch=-1)[source]
Parameters:
  • type (Literal['CosineAnnealingLR'])

  • T_max (int)

  • eta_min (float)

  • last_epoch (int)

type: Literal['CosineAnnealingLR']

Type of the learning rate scheduler.

T_max: int

Maximum number of iterations.

eta_min: float

Minimum learning rate.

last_epoch: int

The index of the last epoch.
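
As a sketch, a cosine schedule that anneals over 100 epochs down to a small floor (import path assumed from the class name above):

    from mattertune.configs.finetune import CosineAnnealingLRConfig

    # Anneal from the optimizer's lr down to eta_min over T_max iterations.
    lr_scheduler = CosineAnnealingLRConfig(T_max=100, eta_min=1e-6)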

class mattertune.configs.finetune.EnergyPropertyConfig(*, name='energy', dtype='float', loss, loss_coefficient=1.0, type='energy')[source]
Parameters:
  • name (str)

  • dtype (DType)

  • loss (LossConfig)

  • loss_coefficient (float)

  • type (Literal['energy'])

type: Literal['energy']
name: str

The name of the property.

This is the key that will be used to access the property in the output of the model.

dtype: DType

The type of the property values.

from_ase_atoms(atoms)[source]

Extract the property value from an ASE Atoms object.

ase_calculator_property_name()[source]

If this property can be calculated by an ASE calculator, returns the name of the property that the ASE calculator uses. Otherwise, returns None.

This should only return non-None for properties supported by the ASE calculator interface: 'energy', 'forces', 'stress', 'dipole', 'charges', 'magmom', and 'magmoms'.

Note that this refers to the built-in properties that ASE can calculate in the ase.calculators.calculator.Calculator class, not to ASE's newer experimental support for custom property prediction.

prepare_value_for_ase_calculator(value)[source]

Convert the property value to a format that can be used by the ASE calculator.

property_type()[source]
loss: LossConfig

The loss function to use when training the model on this property.

loss_coefficient: float

The coefficient to apply to this property’s loss function when training the model.
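
As a sketch, an energy target trained with a mean absolute error loss (MAELossConfig is documented later on this page; import path assumed):

    from mattertune.configs.finetune import EnergyPropertyConfig, MAELossConfig

    # Predict total energy, weighting its MAE loss with a coefficient of 1.0.
    energy = EnergyPropertyConfig(loss=MAELossConfig(), loss_coefficient=1.0)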

class mattertune.configs.finetune.ExponentialConfig(*, type='ExponentialLR', gamma)[source]
Parameters:
  • type (Literal['ExponentialLR'])

  • gamma (float)

type: Literal['ExponentialLR']

Type of the learning rate scheduler.

gamma: float

Multiplicative factor of learning rate decay.

class mattertune.configs.finetune.FinetuneModuleBaseConfig(*, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={})[source]
Parameters:
  • properties (Sequence[PropertyConfig])

  • optimizer (OptimizerConfig)

  • lr_scheduler (LRSchedulerConfig | None)

  • ignore_gpu_batch_transform_error (bool)

  • normalizers (Mapping[str, Sequence[NormalizerConfig]])

properties: Sequence[PropertyConfig]

Properties to predict.

optimizer: OptimizerConfig

Optimizer.

lr_scheduler: LRSchedulerConfig | None

Learning rate scheduler.

ignore_gpu_batch_transform_error: bool

Whether to ignore errors raised by the GPU batch transform during training, rather than raising them.

normalizers: Mapping[str, Sequence[NormalizerConfig]]

Normalizers for the properties.

Any property can be associated with multiple normalizers. This is useful for cases where we want to normalize the same property in different ways. For example, we may want to normalize the energy by subtracting the atomic reference energies, as well as by mean and standard deviation normalization.

The normalizers are applied in the order they are defined in the list.
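
To make the ordering concrete, a toy sketch with two hypothetical normalization steps (plain functions standing in for the actual NormalizerConfig classes, which are defined elsewhere in mattertune.configs):

    # Hypothetical normalization steps, applied in list order.
    def subtract_atomic_references(energy: float, n_atoms: int, e_ref: float = -1.0) -> float:
        # Stand-in for per-atom reference energy subtraction.
        return energy - n_atoms * e_ref

    def standardize(energy: float, mean: float = -6.0, std: float = 2.0) -> float:
        # Stand-in for mean/standard-deviation normalization.
        return (energy - mean) / std

    normalized = standardize(subtract_atomic_references(-10.0, n_atoms=4))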

abstract classmethod ensure_dependencies()[source]

Ensure that all dependencies are installed.

This method should raise an exception if any dependencies are missing, with a message indicating which dependencies are missing and how to install them.

abstract create_model()[source]

Creates an instance of the finetune module for this configuration.

Return type:

FinetuneModuleBase
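
A hedged sketch of the subclass contract, with a hypothetical backbone (the concrete backbone configs live elsewhere in mattertune.configs):

    from mattertune.configs.finetune import FinetuneModuleBaseConfig

    class MyBackboneConfig(FinetuneModuleBaseConfig):  # hypothetical subclass
        @classmethod
        def ensure_dependencies(cls):
            try:
                import torch_geometric  # noqa: F401  # whatever this backbone needs
            except ImportError:
                raise ImportError(
                    "torch-geometric is required: pip install torch_geometric"
                )

        def create_model(self):
            ...  # construct and return this backbone's FinetuneModuleBase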

class mattertune.configs.finetune.ForcesPropertyConfig(*, name='forces', dtype='float', loss, loss_coefficient=1.0, type='forces', conservative)[source]
Parameters:
  • name (str)

  • dtype (DType)

  • loss (LossConfig)

  • loss_coefficient (float)

  • type (Literal['forces'])

  • conservative (bool)

type: Literal['forces']
name: str

The name of the property.

This is the key that will be used to access the property in the output of the model.

dtype: DType

The type of the property values.

conservative: bool

Whether the forces are energy conserving.

This is used by the backbone to decide the type of output head to use for this property. Conservative force predictions are computed by taking the negative gradient of the energy with respect to the atomic positions, whereas non-conservative forces may be computed by other means.
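
The conservative case can be sketched with torch autograd: predict a scalar energy and take the negative gradient with respect to positions (an illustration only, not the backbone's actual output head):

    import torch

    positions = torch.randn(8, 3, requires_grad=True)  # 8 atoms in 3D
    energy = positions.pow(2).sum()  # stand-in for a model's energy prediction
    # Conservative forces are the negative gradient of energy w.r.t. positions.
    (grad,) = torch.autograd.grad(energy, positions)
    forces = -grad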

from_ase_atoms(atoms)[source]

Extract the property value from an ASE Atoms object.

ase_calculator_property_name()[source]

If this property can be calculated by an ASE calculator, returns the name of the property that the ASE calculator uses. Otherwise, returns None.

This should only return non-None for properties supported by the ASE calculator interface: 'energy', 'forces', 'stress', 'dipole', 'charges', 'magmom', and 'magmoms'.

Note that this refers to the built-in properties that ASE can calculate in the ase.calculators.calculator.Calculator class, not to ASE's newer experimental support for custom property prediction.

loss: LossConfig

The loss function to use when training the model on this property.

loss_coefficient: float

The coefficient to apply to this property’s loss function when training the model.

property_type()[source]
class mattertune.configs.finetune.GraphPropertyConfig(*, name, dtype, loss, loss_coefficient=1.0, type='graph_property', reduction)[source]
Parameters:
  • name (str)

  • dtype (DType)

  • loss (LossConfig)

  • loss_coefficient (float)

  • type (Literal['graph_property'])

  • reduction (Literal['mean', 'sum', 'max'])

type: Literal['graph_property']
reduction: Literal['mean', 'sum', 'max']

The reduction to use for the output.

  • "sum": Sum the property values over all atoms in the system. Optimal for extensive properties (e.g. energy).

  • "mean": Average the property values over all atoms in the system. Optimal for intensive properties (e.g. density).

  • "max": Take the maximum of the property values over all atoms in the system. Useful for properties such as the last phdos peak in Matbench's phonons dataset.
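
A quick torch sketch of the three reductions over per-atom values (illustrative only):

    import torch

    per_atom = torch.tensor([1.0, 2.0, 3.0])  # one value per atom in a system
    extensive = per_atom.sum()   # "sum": e.g. total energy
    intensive = per_atom.mean()  # "mean": e.g. density-like targets
    peak = per_atom.max()        # "max": e.g. last phdos peak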

from_ase_atoms(atoms)[source]

Extract the property value from an ASE Atoms object.

ase_calculator_property_name()[source]

If this property can be calculated by an ASE calculator, returns the name of the property that the ASE calculator uses. Otherwise, returns None.

This should only return non-None for properties supported by the ASE calculator interface: 'energy', 'forces', 'stress', 'dipole', 'charges', 'magmom', and 'magmoms'.

Note that this refers to the built-in properties that ASE can calculate in the ase.calculators.calculator.Calculator class, not to ASE's newer experimental support for custom property prediction.

property_type()[source]
name: str

The name of the property.

This is the key that will be used to access the property in the output of the model.

This is also the key that will be used to access the property in the ASE Atoms object.

dtype: DType

The type of the property values.

loss: LossConfig

The loss function to use when training the model on this property.

loss_coefficient: float

The coefficient to apply to this property’s loss function when training the model.

class mattertune.configs.finetune.HuberLossConfig(*, name='huber', delta=1.0, reduction='mean')[source]
Parameters:
  • name (Literal['huber'])

  • delta (float)

  • reduction (Literal['mean', 'sum'])

name: Literal['huber']
delta: float

The threshold value for the Huber loss function.

reduction: Literal['mean', 'sum']

How to reduce the loss values across the batch.

  • "mean": The mean of the loss values.

  • "sum": The sum of the loss values.

class mattertune.configs.finetune.L2MAELossConfig(*, name='l2_mae', reduction='mean')[source]
Parameters:
  • name (Literal['l2_mae'])

  • reduction (Literal['mean', 'sum'])

name: Literal['l2_mae']
reduction: Literal['mean', 'sum']

How to reduce the loss values across the batch.

  • "mean": The mean of the loss values.

  • "sum": The sum of the loss values.

class mattertune.configs.finetune.MAELossConfig(*, name='mae', reduction='mean')[source]
Parameters:
  • name (Literal['mae'])

  • reduction (Literal['mean', 'sum'])

name: Literal['mae']
reduction: Literal['mean', 'sum']

How to reduce the loss values across the batch.

  • "mean": The mean of the loss values.

  • "sum": The sum of the loss values.

class mattertune.configs.finetune.MSELossConfig(*, name='mse', reduction='mean')[source]
Parameters:
  • name (Literal['mse'])

  • reduction (Literal['mean', 'sum'])

name: Literal['mse']
reduction: Literal['mean', 'sum']

How to reduce the loss values across the batch.

  • "mean": The mean of the loss values.

  • "sum": The sum of the loss values.

class mattertune.configs.finetune.MultiStepLRConfig(*, type='MultiStepLR', milestones, gamma)[source]
Parameters:
  • type (Literal['MultiStepLR'])

  • milestones (list[int])

  • gamma (float)

type: Literal['MultiStepLR']

Type of the learning rate scheduler.

milestones: list[int]

List of epoch indices. Must be increasing.

gamma: float

Multiplicative factor of learning rate decay.

class mattertune.configs.finetune.PropertyConfigBase(*, name, dtype, loss, loss_coefficient=1.0)[source]
Parameters:
  • name (str)

  • dtype (DType)

  • loss (LossConfig)

  • loss_coefficient (float)

name: str

The name of the property.

This is the key that will be used to access the property in the output of the model.

This is also the key that will be used to access the property in the ASE Atoms object.

dtype: DType

The type of the property values.

loss: LossConfig

The loss function to use when training the model on this property.

loss_coefficient: float

The coefficient to apply to this property’s loss function when training the model.

abstract from_ase_atoms(atoms)[source]

Extract the property value from an ASE Atoms object.

Parameters:

atoms (Atoms)

Return type:

int | float | ndarray | Tensor

classmethod metric_cls()[source]
Return type:

type[MetricBase]

abstract ase_calculator_property_name()[source]

If this property can be calculated by an ASE calculator, returns the name of the property that the ASE calculator uses. Otherwise, returns None.

This should only return non-None for properties supported by the ASE calculator interface: 'energy', 'forces', 'stress', 'dipole', 'charges', 'magmom', and 'magmoms'.

Note that this refers to the built-in properties that ASE can calculate in the ase.calculators.calculator.Calculator class, not to ASE's newer experimental support for custom property prediction.

Return type:

ASECalculatorPropertyName | None

abstract property_type()[source]
Return type:

Literal['system', 'atom']

prepare_value_for_ase_calculator(value)[source]

Convert the property value to a format that can be used by the ASE calculator.

Parameters:

value (float | ndarray)
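
Since name doubles as the key on the ASE Atoms object, a plausible sketch of how a target would be stored and later extracted (assuming ase is installed; the exact extraction logic is implementation-defined):

    from ase import Atoms

    atoms = Atoms("H2O")
    atoms.info["band_gap"] = 1.1  # store a custom scalar target under the property name
    value = atoms.info["band_gap"]  # what a config named "band_gap" would extract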

class mattertune.configs.finetune.ReduceOnPlateauConfig(*, type='ReduceLROnPlateau', mode, factor, patience, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)[source]
Parameters:
  • type (Literal['ReduceLROnPlateau'])

  • mode (str)

  • factor (float)

  • patience (int)

  • threshold (float)

  • threshold_mode (str)

  • cooldown (int)

  • min_lr (float)

  • eps (float)

type: Literal['ReduceLROnPlateau']

Type of the learning rate scheduler.

mode: str

One of 'min' or 'max'. In 'min' mode, the learning rate is reduced when the monitored quantity stops decreasing; in 'max' mode, when it stops increasing.

factor: float

Factor by which the learning rate will be reduced.

patience: int

Number of epochs with no improvement after which learning rate will be reduced.

threshold: float

Threshold for measuring the new optimum.

threshold_mode: str

One of 'rel' or 'abs'. In 'rel' mode, the threshold is interpreted relative to the best value seen so far; in 'abs' mode, it is an absolute difference.

cooldown: int

Number of epochs to wait before resuming normal operation.

min_lr: float

A lower bound on the learning rate.

eps: float

Minimal decay applied to the learning rate; if the difference between the new and old learning rate is smaller than eps, the update is ignored.
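
For example, halving the learning rate after five epochs without improvement on a minimized validation metric (a sketch; the remaining fields keep their defaults):

    from mattertune.configs.finetune import ReduceOnPlateauConfig

    lr_scheduler = ReduceOnPlateauConfig(mode="min", factor=0.5, patience=5)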

class mattertune.configs.finetune.SGDConfig(*, name='SGD', lr, momentum=0.0, weight_decay=0.0, nestrov=False)[source]
Parameters:
  • name (Literal['SGD'])

  • lr (Annotated[float, Gt(gt=0)])

  • momentum (Annotated[float, Ge(ge=0)])

  • weight_decay (Annotated[float, Ge(ge=0)])

  • nestrov (bool)

name: Literal['SGD']

Name of the optimizer.

lr: C.PositiveFloat

Learning rate.

momentum: C.NonNegativeFloat

Momentum factor.

weight_decay: C.NonNegativeFloat

Weight decay.

nestrov: bool

Whether to use Nesterov momentum.

class mattertune.configs.finetune.StepLRConfig(*, type='StepLR', step_size, gamma)[source]
Parameters:
  • type (Literal['StepLR'])

  • step_size (int)

  • gamma (float)

type: Literal['StepLR']

Type of the learning rate scheduler.

step_size: int

Period of learning rate decay.

gamma: float

Multiplicative factor of learning rate decay.

class mattertune.configs.finetune.StressesPropertyConfig(*, name='stresses', dtype='float', loss, loss_coefficient=1.0, type='stresses', conservative)[source]
Parameters:
  • name (str)

  • dtype (DType)

  • loss (LossConfig)

  • loss_coefficient (float)

  • type (Literal['stresses'])

  • conservative (bool)

loss: LossConfig

The loss function to use when training the model on this property.

loss_coefficient: float

The coefficient to apply to this property’s loss function when training the model.

type: Literal['stresses']
name: str

The name of the property.

This is the key that will be used to access the property in the output of the model.

dtype: DType

The type of the property values.

conservative: bool

Similar to the conservative parameter in ForcesPropertyConfig, this parameter specifies whether the stresses should be computed in a conservative manner.

from_ase_atoms(atoms)[source]

Extract the property value from an ASE Atoms object.

ase_calculator_property_name()[source]

If this property can be calculated by an ASE calculator, returns the name of the property that the ASE calculator uses. Otherwise, returns None.

This should only return non-None for properties supported by the ASE calculator interface: 'energy', 'forces', 'stress', 'dipole', 'charges', 'magmom', and 'magmoms'.

Note that this refers to the built-in properties that ASE can calculate in the ase.calculators.calculator.Calculator class, not to ASE's newer experimental support for custom property prediction.

prepare_value_for_ase_calculator(value)[source]

Convert the property value to a format that can be used by the ASE calculator.

property_type()[source]

Modules

base

loss

lr_scheduler

optimizer

properties