mattertune.backbones.mattersim.model
Classes
- MatterSimGraphConvertorConfig – Configuration for the graph converter used in the MatterSim backbone.
- MatterSimBackboneConfig
- MatterSimM3GNetBackboneModule
- class mattertune.backbones.mattersim.model.MatterSimGraphConvertorConfig(*, twobody_cutoff=5.0, has_threebody=True, threebody_cutoff=4.0)[source]
Configuration for the graph converter used in the MatterSim backbone.
- Parameters:
twobody_cutoff (float)
has_threebody (bool)
threebody_cutoff (float)
- twobody_cutoff: float
The cutoff distance for the two-body interactions.
- has_threebody: bool
Whether to include three-body interactions.
- threebody_cutoff: float
The cutoff distance for the three-body interactions.
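Example (a minimal sketch using only the fields documented above; the values shown match the defaults in the signature):

    from mattertune.backbones.mattersim.model import MatterSimGraphConvertorConfig

    # Build a converter config with explicit cutoffs (these match the defaults).
    graph_convertor = MatterSimGraphConvertorConfig(
        twobody_cutoff=5.0,     # cutoff distance for two-body interactions
        has_threebody=True,     # include three-body interactions
        threebody_cutoff=4.0,   # cutoff distance for three-body interactions
    )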
- class mattertune.backbones.mattersim.model.MatterSimBackboneConfig(*, reset_backbone=False, freeze_backbone=False, reset_output_heads=True, use_pretrained_normalizers=False, output_internal_features=False, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={}, name='mattersim', pretrained_model, model_type='m3gnet', graph_convertor)[source]
- Parameters:
reset_backbone (bool)
freeze_backbone (bool)
reset_output_heads (bool)
use_pretrained_normalizers (bool)
output_internal_features (bool)
properties (Sequence[PropertyConfig])
optimizer (OptimizerConfig)
lr_scheduler (LRSchedulerConfig | None)
ignore_gpu_batch_transform_error (bool)
normalizers (Mapping[str, Sequence[NormalizerConfig]])
name (Literal['mattersim'])
pretrained_model (str)
model_type (Literal['m3gnet', 'graphormer'])
graph_convertor (MatterSimGraphConvertorConfig | dict[str, Any])
- name: Literal['mattersim']
The type of the backbone.
- pretrained_model: str
The name of the pretrained model to load. Available options are MatterSim-v1.0.0-1M, a mini version of the M3GNet model that is faster to run, and MatterSim-v1.0.0-5M, a larger version that is more accurate.
- model_type: Literal['m3gnet', 'graphormer']
- graph_convertor: MatterSimGraphConvertorConfig | dict[str, Any]
Configuration for the graph converter.
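Example (a minimal sketch; my_properties and my_optimizer are placeholders for the Sequence[PropertyConfig] and OptimizerConfig values from your own setup and are not defined here):

    from mattertune.backbones.mattersim.model import (
        MatterSimBackboneConfig,
        MatterSimGraphConvertorConfig,
    )

    # my_properties / my_optimizer are placeholders for your own property and
    # optimizer configs; all other fields use values documented above.
    config = MatterSimBackboneConfig(
        pretrained_model="MatterSim-v1.0.0-1M",   # or "MatterSim-v1.0.0-5M"
        model_type="m3gnet",
        graph_convertor=MatterSimGraphConvertorConfig(twobody_cutoff=5.0),
        properties=my_properties,
        optimizer=my_optimizer,
    )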
- class mattertune.backbones.mattersim.model.MatterSimM3GNetBackboneModule(hparams)[source]
- Parameters:
hparams (TFinetuneModuleConfig)
- requires_disabled_inference_mode()[source]
Whether the model requires inference mode to be disabled.
- setup(stage)[source]
Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.
- Parameters:
stage (str) – either 'fit', 'validate', 'test', or 'predict'
Example:
    class LitModel(...):
        def __init__(self):
            self.l1 = None

        def prepare_data(self):
            download_data()
            tokenize()

            # don't do this
            self.something = something_else

        def setup(self, stage):
            data = load_data(...)
            self.l1 = nn.Linear(28, data.num_classes)
- create_model()[source]
Initialize both the pre-trained backbone and the output heads for the properties to predict.
You should also construct any other nn.Module instances necessary for the forward pass here.
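For illustration only (this is not the module's actual implementation), an override of create_model() typically loads the pretrained backbone and builds one output head per configured property; load_mattersim_backbone and the attribute names below are hypothetical:

    import torch.nn as nn

    def create_model(self):
        # Hypothetical sketch of the contract described above: load the backbone,
        # then construct an output head per property (names are illustrative).
        self.backbone = load_mattersim_backbone(self.hparams.pretrained_model)
        self.output_heads = nn.ModuleDict({"energy": nn.LazyLinear(1)})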
- model_forward_context(data, mode)[source]
Context manager for the model forward pass.
This is used for any setup that needs to be done before the forward pass, e.g., setting pos.requires_grad_() for gradient-based force prediction.
- Parameters:
mode (str)
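For illustration, a forward context of this kind might enable autograd through the atomic positions so that forces can be obtained by differentiating the predicted energy (a sketch; data.pos is an assumed attribute of the batch object, not necessarily the module's internal layout):

    import contextlib
    import torch

    @contextlib.contextmanager
    def example_model_forward_context(data, mode):
        # Sketch only: turn on gradients for positions so a gradient-based
        # force head can backpropagate the energy w.r.t. atomic coordinates.
        with torch.enable_grad():
            data.pos.requires_grad_(True)
            yield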
- model_forward(batch, mode, return_backbone_output=False)[source]
Forward pass of the model.
- Parameters:
batch (Batch) – Input batch.
return_backbone_output (bool) – Whether to return the output of the backbone model.
mode (str)
- Returns:
Prediction of the model.
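Hypothetical usage (the mode string and the "energy" key are assumptions; the actual keys follow your property configuration):

    # `module` is a constructed MatterSimM3GNetBackboneModule and `batch` is a
    # batched graph produced by the data pipeline (placeholders, not defined here).
    preds = module.model_forward(batch, mode="predict", return_backbone_output=False)
    energy = preds["energy"]  # assumed property key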
- cpu_data_transform(data)[source]
Transform data (on the CPU) before being batched and sent to the GPU.
- gpu_batch_transform(batch)[source]
Transform batch (on the GPU) before being fed to the model.
This will mainly be used to compute the (radius or knn) graph from the atomic positions.
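For illustration, the kind of graph construction this hook performs can be sketched with torch_geometric's radius_graph (an assumption about tooling; the actual transform uses the MatterSim graph converter configured above):

    import torch
    from torch_geometric.nn import radius_graph  # requires torch-cluster

    # Connect every pair of atoms closer than the two-body cutoff (5.0 here).
    pos = torch.rand(10, 3) * 5.0                  # 10 atoms with random positions
    batch_idx = torch.zeros(10, dtype=torch.long)  # all atoms belong to structure 0
    edge_index = radius_graph(pos, r=5.0, batch=batch_idx)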
- batch_to_labels(batch)[source]
Extract ground truth values from a batch. The output of this function should be a dictionary with keys corresponding to the target names and values corresponding to the ground truth values. The values should be torch tensors that match, in shape, the output of the corresponding output head.
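For example, for a batch of two structures with energy and forces targets, the returned dictionary could look like this (property names and sizes are illustrative):

    import torch

    labels = {
        "energy": torch.tensor([-10.2, -8.7]),  # one scalar per structure
        "forces": torch.zeros(17, 3),           # one 3-vector per atom in the batch
    }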
- atoms_to_data(atoms, has_labels)[source]
Convert an ASE atoms object to a data object. This is used to convert the input data to the format expected by the model.
- Parameters:
atoms – ASE atoms object.
has_labels – Whether the atoms object contains labels.
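Hypothetical usage (`module` is a constructed MatterSimM3GNetBackboneModule; the ASE structure is arbitrary):

    from ase.build import bulk

    atoms = bulk("Cu", "fcc", a=3.6)                      # a simple copper crystal
    data = module.atoms_to_data(atoms, has_labels=False)  # no labels for inference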
- create_normalization_context_from_batch(batch)[source]
Create a normalization context from a batch. This is used to normalize and denormalize the properties.
The normalization context contains all the information required to normalize and denormalize the properties. Currently, this only includes the compositions of the materials in the batch. The compositions should be provided as an integer tensor of shape (batch_size, num_elements), where each row (i.e., compositions[i]) corresponds to the composition vector of the i-th material in the batch.
The composition vector is a vector that maps each element to the number of atoms of that element in the material. For example, compositions[:, 1] corresponds to the number of Hydrogen atoms in each material in the batch, compositions[:, 2] corresponds to the number of Helium atoms, and so on.
- Parameters:
batch – Input batch.
- Returns:
Normalization context.
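For example, the composition tensor described above can be built from per-structure atomic numbers like this (a sketch; the number of columns, 120 here, is illustrative):

    import torch

    # Structure 0 is H2O, structure 1 is NaCl; rows are structures, columns are
    # atomic numbers, and each entry counts the atoms of that element.
    atomic_numbers = [torch.tensor([1, 1, 8]), torch.tensor([11, 17])]
    compositions = torch.stack(
        [torch.bincount(z, minlength=120) for z in atomic_numbers]
    )
    print(compositions.shape)         # torch.Size([2, 120])
    print(compositions[0, 1].item())  # 2 hydrogen atoms in the first structure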
- optimizer_step(epoch, batch_idx, optimizer, optimizer_closure=None)[source]
Override this method to adjust the default way the Trainer calls the optimizer. By default, Lightning calls step() and zero_grad() as shown in the example. This method (and zero_grad()) won't be called during the accumulation phase when Trainer(accumulate_grad_batches != 1). Overriding this hook has no benefit with manual optimization.
- Parameters:
epoch (int) – Current epoch
batch_idx (int) – Index of current batch
optimizer – A PyTorch optimizer
optimizer_closure – The optimizer closure. This closure must be executed as it includes the calls to training_step(), optimizer.zero_grad(), and backward().
Examples:
    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure):
        # Add your custom logic to run directly before `optimizer.step()`
        optimizer.step(closure=optimizer_closure)
        # Add your custom logic to run directly after `optimizer.step()`
- apply_callable_to_backbone(fn)[source]
Apply a callable to the backbone model and return the result.
This is useful for applying functions to the backbone model that are not part of the standard forward pass. For example, it can be used to update the structure or weights of the backbone model, such as when applying LoRA.
- Parameters:
fn – Callable to apply to the backbone model.
- Returns:
Result of the callable.
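Hypothetical usage (`module` is a constructed MatterSimM3GNetBackboneModule): count the trainable parameters of the wrapped backbone, which excludes the output heads.

    n_backbone_params = module.apply_callable_to_backbone(
        lambda backbone: sum(p.numel() for p in backbone.parameters() if p.requires_grad)
    )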