mattertune.configs.backbones.uma
- class mattertune.configs.backbones.uma.FAIRChemAtomsToGraphSystemConfig(*, radius, max_num_neighbors=None)[source]
Configuration for converting ASE Atoms to a graph for the FAIRChem model.
- Parameters:
radius (float)
max_num_neighbors (int | None)
- radius: float
The radius for edge construction.
- max_num_neighbors: int | None
The maximum number of neighbors each node can send messages to. If None, the number of neighbors is unlimited.
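These two parameters control how edges are built when ASE Atoms are converted to a graph: nodes within `radius` are connected, and `max_num_neighbors` optionally caps each node's incoming edges at the nearest neighbors. A minimal plain-Python sketch of the idea (an illustration, not the FAIRChem code path):

```python
import math

def build_edges(positions, radius, max_num_neighbors=None):
    """Connect each node to every other node within `radius`.

    If `max_num_neighbors` is set, keep only the closest neighbors.
    Plain-Python illustration, not the FAIRChem implementation.
    """
    edges = []
    for i, pi in enumerate(positions):
        # Collect (distance, j) pairs for all candidate neighbors of i.
        candidates = []
        for j, pj in enumerate(positions):
            if i == j:
                continue
            d = math.dist(pi, pj)
            if d <= radius:
                candidates.append((d, j))
        candidates.sort()  # nearest neighbors first
        if max_num_neighbors is not None:
            candidates = candidates[:max_num_neighbors]
        edges.extend((i, j) for _, j in candidates)
    return edges

# Three collinear atoms 1 Å apart; a 1.5 Å radius connects only adjacent pairs.
pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(build_edges(pos, radius=1.5))
print(build_edges(pos, radius=3.0, max_num_neighbors=1))  # nearest neighbor only
```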
- class mattertune.configs.backbones.uma.FinetuneModuleBaseConfig(*, reset_backbone=False, freeze_backbone=False, reset_output_heads=True, use_pretrained_normalizers=False, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={})[source]
- Parameters:
reset_backbone (bool)
freeze_backbone (bool)
reset_output_heads (bool)
use_pretrained_normalizers (bool)
properties (Sequence[PropertyConfig])
optimizer (OptimizerConfig)
lr_scheduler (LRSchedulerConfig | None)
ignore_gpu_batch_transform_error (bool)
normalizers (Mapping[str, Sequence[NormalizerConfig]])
- reset_backbone: bool
Whether to reset the backbone of the model when creating the model.
- freeze_backbone: bool
Whether to freeze the backbone during training.
- reset_output_heads: bool
Whether to reset the output heads of the model when creating the model.
- use_pretrained_normalizers: bool
Whether to use the pretrained normalizers.
- properties: Sequence[PropertyConfig]
Properties to predict.
- optimizer: OptimizerConfig
Optimizer.
- lr_scheduler: LRSchedulerConfig | None
Learning rate scheduler.
- ignore_gpu_batch_transform_error: bool
Whether to ignore data processing errors during training.
- normalizers: Mapping[str, Sequence[NormalizerConfig]]
Normalizers for the properties.
Any property can be associated with multiple normalizers. This is useful for cases where we want to normalize the same property in different ways. For example, we may want to normalize the energy by subtracting the atomic reference energies, as well as by mean and standard deviation normalization.
The normalizers are applied in the order they are defined in the list.
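The ordered-chain behavior described above can be illustrated with plain callables standing in for normalizers (hypothetical functions, not mattertune's `NormalizerConfig` API): first subtract per-atom reference energies, then apply mean/std normalization to the residual.

```python
def subtract_reference(energy, n_atoms, e_ref_per_atom):
    # First normalizer in the list: remove the atomic reference contribution.
    return energy - n_atoms * e_ref_per_atom

def standardize(energy, mean, std):
    # Second normalizer: mean/std normalization of what the first produced.
    return (energy - mean) / std

def normalize_energy(energy, n_atoms, e_ref_per_atom=-1.0, mean=0.5, std=2.0):
    # Normalizers are applied in list order; each sees the previous output.
    # The reference/mean/std values here are made-up illustration numbers.
    e = subtract_reference(energy, n_atoms, e_ref_per_atom)
    return standardize(e, mean, std)

# Example: total energy of -9.5 eV for a 10-atom system.
print(normalize_energy(-9.5, n_atoms=10))
```

Reversing the list order would change the result, which is why the order in the config matters.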
- abstract classmethod ensure_dependencies()[source]
Ensure that all dependencies are installed.
This method should raise an exception if any dependencies are missing, with a message indicating which dependencies are missing and how to install them.
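A subclass implementation might follow the sketch below, which probes for packages with `importlib` and raises an error naming what is missing and how to install it. This is only an illustration of the stated contract; the real method is a classmethod on the config class, and the package names it checks are backbone-specific.

```python
import importlib.util

def ensure_dependencies(required=("fairchem",)):
    """Raise ImportError listing missing packages and an install hint.

    Sketch of the contract documented above, not mattertune's actual check.
    """
    missing = [pkg for pkg in required if importlib.util.find_spec(pkg) is None]
    if missing:
        raise ImportError(
            f"Missing dependencies: {', '.join(missing)}. "
            f"Install them with: pip install {' '.join(missing)}"
        )
```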
- class mattertune.configs.backbones.uma.UMABackboneConfig(*, reset_backbone=False, freeze_backbone=False, reset_output_heads=True, use_pretrained_normalizers=False, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={}, name='uma', model_name, atoms_to_graph=FAIRChemAtomsToGraphSystemConfig(radius=6.0, max_num_neighbors=None), task_name=None)[source]
- Parameters:
reset_backbone (bool)
freeze_backbone (bool)
reset_output_heads (bool)
use_pretrained_normalizers (bool)
properties (Sequence[PropertyConfig])
optimizer (OptimizerConfig)
lr_scheduler (LRSchedulerConfig | None)
ignore_gpu_batch_transform_error (bool)
normalizers (Mapping[str, Sequence[NormalizerConfig]])
name (Literal['uma'])
model_name (str)
atoms_to_graph (FAIRChemAtomsToGraphSystemConfig)
task_name (str | None)
- name: Literal['uma']
The name of the backbone model to use. Should be "uma".
- model_name: str
The specific UMA model variant to use. Options include:
- "uma-s-1"
- "uma-s-1.1"
- "uma-m-1.1"
- "uma-l"
- atoms_to_graph: FAIRChemAtomsToGraphSystemConfig
Configuration for converting atomic data to graph representations.
- task_name: str | None
The task name for the dataset, e.g., "oc20", "omol", "omat", "odac", "omc". If None, it will be inferred from the data.
- classmethod ensure_dependencies()[source]
Ensure that all dependencies are installed.
This method should raise an exception if any dependencies are missing, with a message indicating which dependencies are missing and how to install them.
- reset_backbone: bool
Whether to reset the backbone of the model when creating the model.
- freeze_backbone: bool
Whether to freeze the backbone during training.
- reset_output_heads: bool
Whether to reset the output heads of the model when creating the model.
- use_pretrained_normalizers: bool
Whether to use the pretrained normalizers.
- properties: Sequence[PropertyConfig]
Properties to predict.
- optimizer: OptimizerConfig
Optimizer.
- lr_scheduler: LRSchedulerConfig | None
Learning rate scheduler.
- ignore_gpu_batch_transform_error: bool
Whether to ignore data processing errors during training.
- normalizers: Mapping[str, Sequence[NormalizerConfig]]
Normalizers for the properties.
Any property can be associated with multiple normalizers. This is useful for cases where we want to normalize the same property in different ways. For example, we may want to normalize the energy by subtracting the atomic reference energies, as well as by mean and standard deviation normalization.
The normalizers are applied in the order they are defined in the list.
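Pulling the backbone-specific fields together, a plain-dict view of the signature above and its defaults might look like the following sketch (just a mapping mirroring the documented defaults, not an object constructed through mattertune):

```python
# Backbone-specific fields of UMABackboneConfig with their documented defaults.
uma_backbone = {
    "name": "uma",                  # Literal["uma"]
    "model_name": "uma-s-1.1",      # one of the UMA variants listed above
    "atoms_to_graph": {
        "radius": 6.0,              # default edge-construction cutoff
        "max_num_neighbors": None,  # unlimited by default
    },
    "task_name": None,              # inferred from the data when None
}
print(uma_backbone["atoms_to_graph"]["radius"])
```

The remaining fields (properties, optimizer, lr_scheduler, normalizers, and the reset/freeze flags) are inherited from FinetuneModuleBaseConfig and documented above.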