mattertune.configs.backbones

class mattertune.configs.backbones.CutoffsConfig(*, main, aeaint, qint, aint)[source]
Parameters:
  • main (float)

  • aeaint (float)

  • qint (float)

  • aint (float)

main: float
aeaint: float
qint: float
aint: float
classmethod from_constant(value)[source]
Parameters:
  • value (float)
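
A minimal sketch of this classmethod, assuming from_constant fills all four cutoffs with the same value (consistent with the JMPGraphComputerConfig defaults shown below):

    from mattertune.configs.backbones import CutoffsConfig

    # One value for all four interaction cutoffs (main, aeaint, qint, aint).
    cutoffs = CutoffsConfig.from_constant(12.0)
    # Presumably equivalent to spelling each field out:
    cutoffs = CutoffsConfig(main=12.0, aeaint=12.0, qint=12.0, aint=12.0)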

class mattertune.configs.backbones.EqV2BackboneConfig(*, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={}, name='eqV2', checkpoint_path, atoms_to_graph)[source]
Parameters:
  • properties (Sequence[PropertyConfig])

  • optimizer (OptimizerConfig)

  • lr_scheduler (LRSchedulerConfig | None)

  • ignore_gpu_batch_transform_error (bool)

  • normalizers (Mapping[str, Sequence[NormalizerConfig]])

  • name (Literal['eqV2'])

  • checkpoint_path (Path | CachedPath)

  • atoms_to_graph (FAIRChemAtomsToGraphSystemConfig)

name: Literal['eqV2']

The type of the backbone.

checkpoint_path: Path | CE.CachedPath

The path to the checkpoint to load.

atoms_to_graph: FAIRChemAtomsToGraphSystemConfig

Configuration for converting ASE Atoms to a graph.

classmethod ensure_dependencies()[source]

Ensure that all dependencies are installed.

This method should raise an exception if any dependencies are missing, with a message indicating which dependencies are missing and how to install them.

create_model()[source]

Creates an instance of the finetune module for this configuration.

properties: Sequence[PropertyConfig]

Properties to predict.

optimizer: OptimizerConfig

Optimizer.

lr_scheduler: LRSchedulerConfig | None

Learning rate scheduler.

ignore_gpu_batch_transform_error: bool

Whether to ignore data processing errors during training.

normalizers: Mapping[str, Sequence[NormalizerConfig]]

Normalizers for the properties.

Any property can be associated with multiple normalizers. This is useful for cases where we want to normalize the same property in different ways. For example, we may want to normalize the energy by subtracting the atomic reference energies, as well as by mean and standard deviation normalization.

The normalizers are applied in the order they are defined in the list.
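
A construction sketch for this backbone. The property, loss, and optimizer config names (EnergyPropertyConfig, MAELossConfig, AdamWConfig) are assumed to come from the wider mattertune.configs package and are not documented in this module; the checkpoint path is a placeholder:

    from pathlib import Path

    # Assumed names from mattertune.configs; adjust to the configs you actually use.
    from mattertune.configs import AdamWConfig, EnergyPropertyConfig, MAELossConfig
    from mattertune.configs.backbones import (
        EqV2BackboneConfig,
        FAIRChemAtomsToGraphSystemConfig,
    )

    config = EqV2BackboneConfig(
        properties=[EnergyPropertyConfig(loss=MAELossConfig())],
        optimizer=AdamWConfig(lr=1e-4),
        checkpoint_path=Path("checkpoints/eqV2.pt"),  # placeholder path
        atoms_to_graph=FAIRChemAtomsToGraphSystemConfig(radius=8.0, max_num_neighbors=20),
    )
    model = config.create_model()  # builds the finetune module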

class mattertune.configs.backbones.FAIRChemAtomsToGraphSystemConfig(*, radius, max_num_neighbors)[source]

Configuration for converting ASE Atoms to a graph for the FAIRChem model.

Parameters:
  • radius (float)

  • max_num_neighbors (int)

radius: float

The radius for edge construction.

max_num_neighbors: int

The maximum number of neighbors each node can send messages to.

class mattertune.configs.backbones.FinetuneModuleBaseConfig(*, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={})[source]
Parameters:
  • properties (Sequence[PropertyConfig])

  • optimizer (OptimizerConfig)

  • lr_scheduler (LRSchedulerConfig | None)

  • ignore_gpu_batch_transform_error (bool)

  • normalizers (Mapping[str, Sequence[NormalizerConfig]])

properties: Sequence[PropertyConfig]

Properties to predict.

optimizer: OptimizerConfig

Optimizer.

lr_scheduler: LRSchedulerConfig | None

Learning rate scheduler.

ignore_gpu_batch_transform_error: bool

Whether to ignore data processing errors during training.

normalizers: Mapping[str, Sequence[NormalizerConfig]]

Normalizers for the properties.

Any property can be associated with multiple normalizers. This is useful for cases where we want to normalize the same property in different ways. For example, we may want to normalize the energy by subtracting the atomic reference energies, as well as by mean and standard deviation normalization.

The normalizers are applied in the order they are defined in the list.
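
For example, a sketch of a normalizers mapping that applies two normalizers to the energy in sequence; the normalizer config names and their parameters are assumptions for illustration, not names defined in this module:

    # Hypothetical normalizer configs from mattertune.configs.
    from mattertune.configs import MeanStdNormalizerConfig, PerAtomReferencingNormalizerConfig

    normalizers = {
        # Applied in order: reference subtraction first, then mean/std scaling.
        "energy": [
            PerAtomReferencingNormalizerConfig(per_atom_references="refs.json"),
            MeanStdNormalizerConfig(mean=0.0, std=1.0),
        ]
    }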

abstract classmethod ensure_dependencies()[source]

Ensure that all dependencies are installed.

This method should raise an exception if any dependencies are missing, with a message indicating which dependencies are missing and how to install them.

abstract create_model()[source]

Creates an instance of the finetune module for this configuration.

Return type:

FinetuneModuleBase
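
The intended usage for any concrete subclass is a check-then-build sequence, sketched here with a generic config instance:

    # `config` is any concrete FinetuneModuleBaseConfig subclass instance,
    # e.g. a JMPBackboneConfig or ORBBackboneConfig.
    config.ensure_dependencies()    # classmethod; raises if optional deps are missing
    module = config.create_model()  # returns a FinetuneModuleBase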

class mattertune.configs.backbones.JMPBackboneConfig(*, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={}, name='jmp', ckpt_path, graph_computer)[source]
Parameters:
  • properties (Sequence[PropertyConfig])

  • optimizer (OptimizerConfig)

  • lr_scheduler (LRSchedulerConfig | None)

  • ignore_gpu_batch_transform_error (bool)

  • normalizers (Mapping[str, Sequence[NormalizerConfig]])

  • name (Literal['jmp'])

  • ckpt_path (Path | CachedPath)

  • graph_computer (JMPGraphComputerConfig)

name: Literal['jmp']

The type of the backbone.

ckpt_path: Path | CE.CachedPath

The path to the pre-trained model checkpoint.

graph_computer: JMPGraphComputerConfig

The configuration for the graph computer.

create_model()[source]

Creates an instance of the finetune module for this configuration.

classmethod ensure_dependencies()[source]

Ensure that all dependencies are installed.

This method should raise an exception if any dependencies are missing, with a message indicating which dependencies are missing and how to install them.

properties: Sequence[PropertyConfig]

Properties to predict.

optimizer: OptimizerConfig

Optimizer.

lr_scheduler: LRSchedulerConfig | None

Learning rate scheduler.

ignore_gpu_batch_transform_error: bool

Whether to ignore data processing errors during training.

normalizers: Mapping[str, Sequence[NormalizerConfig]]

Normalizers for the properties.

Any property can be associated with multiple normalizers. This is useful for cases where we want to normalize the same property in different ways. For example, we may want to normalize the energy by subtracting the atomic reference energies, as well as by mean and standard deviation normalization.

The normalizers are applied in the order they are defined in the list.

class mattertune.configs.backbones.JMPGraphComputerConfig(*, pbc, cutoffs=CutoffsConfig(main=12.0, aeaint=12.0, qint=12.0, aint=12.0), max_neighbors=MaxNeighborsConfig(main=30, aeaint=20, qint=8, aint=1000), per_graph_radius_graph=False)[source]
Parameters:
  • pbc (bool)

  • cutoffs (CutoffsConfig)

  • max_neighbors (MaxNeighborsConfig)

  • per_graph_radius_graph (bool)

pbc: bool

Whether to use periodic boundary conditions.

cutoffs: CutoffsConfig

The cutoffs for the radius graph.

max_neighbors: MaxNeighborsConfig

The maximum number of neighbors for the radius graph.

per_graph_radius_graph: bool

Whether to compute the radius graph separately for each graph rather than over the whole batch.
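
A sketch that spells out the documented defaults for a periodic system:

    from mattertune.configs.backbones import (
        CutoffsConfig,
        JMPGraphComputerConfig,
        MaxNeighborsConfig,
    )

    graph_computer = JMPGraphComputerConfig(
        pbc=True,  # periodic boundary conditions
        cutoffs=CutoffsConfig.from_constant(12.0),
        max_neighbors=MaxNeighborsConfig(main=30, aeaint=20, qint=8, aint=1000),
        # per_graph_radius_graph keeps its default of False
    )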

class mattertune.configs.backbones.M3GNetBackboneConfig(*, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={}, name='m3gnet', ckpt_path, graph_computer)[source]
Parameters:
  • properties (Sequence[PropertyConfig])

  • optimizer (OptimizerConfig)

  • lr_scheduler (LRSchedulerConfig | None)

  • ignore_gpu_batch_transform_error (bool)

  • normalizers (Mapping[str, Sequence[NormalizerConfig]])

  • name (Literal['m3gnet'])

  • ckpt_path (str | Path)

  • graph_computer (M3GNetGraphComputerConfig)

name: Literal['m3gnet']

The type of the backbone.

ckpt_path: str | Path

The path to the pre-trained model checkpoint.

graph_computer: M3GNetGraphComputerConfig

Configuration for the graph computer.

properties: Sequence[PropertyConfig]

Properties to predict.

optimizer: OptimizerConfig

Optimizer.

lr_scheduler: LRSchedulerConfig | None

Learning rate scheduler.

ignore_gpu_batch_transform_error: bool

Whether to ignore data processing errors during training.

normalizers: Mapping[str, Sequence[NormalizerConfig]]

Normalizers for the properties.

Any property can be associated with multiple normalizers. This is useful for cases where we want to normalize the same property in different ways. For example, we may want to normalize the energy by subtracting the atomic reference energies, as well as by mean and standard deviation normalization.

The normalizers are applied in the order they are defined in the list.

create_model()[source]

Creates an instance of the finetune module for this configuration.

classmethod ensure_dependencies()[source]

Ensure that all dependencies are installed.

This method should raise an exception if any dependencies are missing, with a message indicating which dependencies are missing and how to install them.

class mattertune.configs.backbones.M3GNetGraphComputerConfig(*, element_types=<factory>, cutoff=None, threebody_cutoff=None, pre_compute_line_graph=False, graph_labels=None)[source]

Configuration for initializing a MatGL Atoms2Graph converter.

Parameters:
  • element_types (tuple[str, ...])

  • cutoff (float | None)

  • threebody_cutoff (float | None)

  • pre_compute_line_graph (bool)

  • graph_labels (list[int | float] | None)

element_types: tuple[str, ...]

The element types to consider. Defaults to all elements.

cutoff: float | None

The cutoff distance for the neighbor list. If None, the cutoff is loaded from the checkpoint.

threebody_cutoff: float | None

The cutoff distance for the three-body interactions. If None, the cutoff is loaded from the checkpoint.

pre_compute_line_graph: bool

Whether to pre-compute the line graph for three-body interactions during data preparation.

graph_labels: list[int | float] | None

The graph labels to consider. Defaults to None.
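
A sketch that defers both cutoffs to the checkpoint, as described above:

    from mattertune.configs.backbones import M3GNetGraphComputerConfig

    # With both cutoffs left as None, the values are loaded from the
    # pre-trained checkpoint referenced by M3GNetBackboneConfig.ckpt_path.
    graph_computer = M3GNetGraphComputerConfig(
        cutoff=None,
        threebody_cutoff=None,
        pre_compute_line_graph=False,
    )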

class mattertune.configs.backbones.MaxNeighborsConfig(*, main, aeaint, qint, aint)[source]
Parameters:
  • main (int)

  • aeaint (int)

  • qint (int)

  • aint (int)

main: int
aeaint: int
qint: int
aint: int
classmethod from_goc_base_proportions(max_neighbors)[source]
GOC base proportions:

  • max_neighbors: 30

  • max_neighbors_qint: 8

  • max_neighbors_aeaint: 20

  • max_neighbors_aint: 1000

Parameters:
  • max_neighbors (int)
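
A sketch of this classmethod; the assumption here is that the argument scales the GOC base proportions, so passing 30 reproduces them exactly:

    from mattertune.configs.backbones import MaxNeighborsConfig

    # Expected to match MaxNeighborsConfig(main=30, aeaint=20, qint=8, aint=1000).
    max_neighbors = MaxNeighborsConfig.from_goc_base_proportions(30)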

class mattertune.configs.backbones.ORBBackboneConfig(*, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={}, name='orb', pretrained_model, system=ORBSystemConfig(radius=10.0, max_num_neighbors=20))[source]
Parameters:
  • properties (Sequence[PropertyConfig])

  • optimizer (OptimizerConfig)

  • lr_scheduler (LRSchedulerConfig | None)

  • ignore_gpu_batch_transform_error (bool)

  • normalizers (Mapping[str, Sequence[NormalizerConfig]])

  • name (Literal['orb'])

  • pretrained_model (str)

  • system (ORBSystemConfig)

name: Literal['orb']

The type of the backbone.

pretrained_model: str

The name of the pretrained model to load.

system: ORBSystemConfig

The system configuration, controlling how to featurize a system of atoms.

create_model()[source]

Creates an instance of the finetune module for this configuration.

classmethod ensure_dependencies()[source]

Ensure that all dependencies are installed.

This method should raise an exception if any dependencies are missing, with a message indicating which dependencies are missing and how to install them.

properties: Sequence[PropertyConfig]

Properties to predict.

optimizer: OptimizerConfig

Optimizer.

lr_scheduler: LRSchedulerConfig | None

Learning rate scheduler.

ignore_gpu_batch_transform_error: bool

Whether to ignore data processing errors during training.

normalizers: Mapping[str, Sequence[NormalizerConfig]]

Normalizers for the properties.

Any property can be associated with multiple normalizers. This is useful for cases where we want to normalize the same property in different ways. For example, we may want to normalize the energy by subtracting the atomic reference energies, as well as by mean and standard deviation normalization.

The normalizers are applied in the order they are defined in the list.
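
A construction sketch; as in the EqV2 example above, the property and optimizer config names are assumed from mattertune.configs, and the pretrained model name is illustrative:

    # Assumed names from mattertune.configs.
    from mattertune.configs import AdamWConfig, EnergyPropertyConfig, MAELossConfig
    from mattertune.configs.backbones import ORBBackboneConfig

    config = ORBBackboneConfig(
        properties=[EnergyPropertyConfig(loss=MAELossConfig())],
        optimizer=AdamWConfig(lr=1e-4),
        pretrained_model="orb-v2",  # illustrative pretrained model name
    )
    # `system` keeps its default: ORBSystemConfig(radius=10.0, max_num_neighbors=20)
    model = config.create_model()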

class mattertune.configs.backbones.ORBSystemConfig(*, radius, max_num_neighbors)[source]

Config controlling how to featurize a system of atoms.

Parameters:
  • radius (float)

  • max_num_neighbors (int)

radius: float

The radius for edge construction.

max_num_neighbors: int

The maximum number of neighbors each node can send messages to.

Modules

eqV2

jmp

m3gnet

orb