mattertune.backbones.eqV2.model
Classes
- class mattertune.backbones.eqV2.model.FAIRChemAtomsToGraphSystemConfig(*, radius, max_num_neighbors)[source]
Configuration for converting ASE Atoms to a graph for the FAIRChem model.
- Parameters:
radius (float)
max_num_neighbors (int)
- radius: float
The radius for edge construction.
- max_num_neighbors: int
The maximum number of neighbors each node can send messages to.
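To illustrate what these two parameters control, here is a minimal pure-Python sketch of radius-based neighbor construction with a per-node cap. This is not the library's internal implementation (FAIRChem builds the graph on tensors); the function name and list-based layout are illustrative only.

```python
import math

def radius_neighbors(positions, radius, max_num_neighbors):
    """Illustrative neighbor search: for each atom, collect the indices of
    atoms within `radius`, keeping at most `max_num_neighbors` (nearest first)."""
    neighbors = []
    for i, pi in enumerate(positions):
        # Distances from atom i to every other atom.
        dists = [(math.dist(pi, pj), j)
                 for j, pj in enumerate(positions) if j != i]
        # Keep only atoms inside the cutoff, nearest first.
        within = sorted(d for d in dists if d[0] <= radius)
        neighbors.append([j for _, j in within[:max_num_neighbors]])
    return neighbors

# Three collinear atoms 1.0 apart; a radius of 1.5 reaches only adjacent atoms.
print(radius_neighbors([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
                       radius=1.5, max_num_neighbors=8))  # → [[1], [0, 2], [1]]
```

Lowering max_num_neighbors to 1 would truncate the middle atom's neighbor list to its single nearest neighbor.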
- class mattertune.backbones.eqV2.model.EqV2BackboneConfig(*, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={}, name='eqV2', checkpoint_path, atoms_to_graph)[source]
- Parameters:
properties (Sequence[PropertyConfig])
optimizer (OptimizerConfig)
lr_scheduler (LRSchedulerConfig | None)
ignore_gpu_batch_transform_error (bool)
normalizers (Mapping[str, Sequence[NormalizerConfig]])
name (Literal['eqV2'])
checkpoint_path (Path | CachedPath)
atoms_to_graph (FAIRChemAtomsToGraphSystemConfig)
- name: Literal['eqV2']
The type of the backbone.
- checkpoint_path: Path | CE.CachedPath
The path to the checkpoint to load.
- atoms_to_graph: FAIRChemAtomsToGraphSystemConfig
Configuration for converting ASE Atoms to a graph.
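Putting the two config classes together, a construction might look like the sketch below. The checkpoint path, cutoff values, and the `properties`/`optimizer` placeholders are all illustrative assumptions; the corresponding PropertyConfig and OptimizerConfig constructors are documented elsewhere and are not shown here.

```python
from pathlib import Path

from mattertune.backbones.eqV2.model import (
    EqV2BackboneConfig,
    FAIRChemAtomsToGraphSystemConfig,
)

backbone = EqV2BackboneConfig(
    name="eqV2",
    checkpoint_path=Path("checkpoints/eqV2.pt"),  # hypothetical path
    atoms_to_graph=FAIRChemAtomsToGraphSystemConfig(
        radius=8.0,            # cutoff for edge construction (value assumed)
        max_num_neighbors=20,  # cap on neighbors per node (value assumed)
    ),
    properties=[...],  # Sequence[PropertyConfig], built elsewhere
    optimizer=...,     # OptimizerConfig, built elsewhere
)
```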
- class mattertune.backbones.eqV2.model.EqV2BackboneModule(hparams)[source]
- Parameters:
hparams (TFinetuneModuleConfig)
- requires_disabled_inference_mode()[source]
Whether the model requires inference mode to be disabled.
- create_model()[source]
Initialize both the pre-trained backbone and the output heads for the properties to predict. You should also construct any other nn.Module instances necessary for the forward pass here.
- model_forward_context(data)[source]
Context manager for the model forward pass.
This is used for any setup that needs to be done before the forward pass, e.g., setting pos.requires_grad_() for gradient-based force prediction.
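The setup/teardown contract described above follows the standard context-manager pattern. The sketch below shows that pattern only, with a plain dict flag standing in for calling pos.requires_grad_() on a torch tensor; the function body is illustrative, not the module's actual implementation.

```python
from contextlib import contextmanager

@contextmanager
def model_forward_context(data):
    """Illustrative pattern: flip a flag before the forward pass and restore
    it afterwards (the real hook enables gradients on atomic positions for
    gradient-based force prediction)."""
    previous = data.get("requires_grad", False)
    data["requires_grad"] = True      # setup before the forward pass
    try:
        yield data
    finally:
        data["requires_grad"] = previous  # teardown runs even on error

batch = {"pos": [0.0, 1.0], "requires_grad": False}
with model_forward_context(batch):
    assert batch["requires_grad"] is True  # forward pass would run here
assert batch["requires_grad"] is False     # state restored on exit
```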
- model_forward(batch, return_backbone_output=False)[source]
Forward pass of the model.
- Parameters:
batch – Input batch.
return_backbone_output – Whether to return the output of the backbone model.
- Returns:
Prediction of the model.
- cpu_data_transform(data)[source]
Transform data (on the CPU) before being batched and sent to the GPU.
- gpu_batch_transform(batch)[source]
Transform batch (on the GPU) before being fed to the model.
This is mainly used to compute the (radius or k-NN) graph from the atomic positions.
- batch_to_labels(batch)[source]
Extract ground truth values from a batch. The output of this function should be a dictionary with keys corresponding to the target names and values corresponding to the ground truth values. The values should be torch tensors that match, in shape, the output of the corresponding output head.
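A minimal sketch of that contract, with plain nested lists standing in for torch tensors and hypothetical target names ("energy", "forces"); the real batch layout depends on the configured properties.

```python
def batch_to_labels(batch):
    """Illustrative only: pull ground-truth targets out of a batch dict.
    Keys are target names; each value must match, in shape, the output of
    the corresponding output head."""
    target_names = ("energy", "forces")
    return {name: batch[name] for name in target_names if name in batch}

batch = {
    "pos": [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]],       # inputs, not labels
    "energy": [-3.2],                                 # per-structure scalar head
    "forces": [[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0]],    # per-atom vector head
}
labels = batch_to_labels(batch)  # only the target entries survive
```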
- atoms_to_data(atoms, has_labels)[source]
Convert an ASE atoms object to a data object. This is used to convert the input data to the format expected by the model.
- Parameters:
atoms – ASE atoms object.
has_labels – Whether the atoms object contains labels.
- create_normalization_context_from_batch(batch)[source]
Create a normalization context from a batch. This is used to normalize and denormalize the properties.
The normalization context contains all the information required to normalize and denormalize the properties. Currently, this only includes the compositions of the materials in the batch. The compositions should be provided as an integer tensor of shape (batch_size, num_elements), where each row (i.e., compositions[i]) corresponds to the composition vector of the i-th material in the batch.
The composition vector maps each element to the number of atoms of that element in the material. For example, compositions[:, 1] is the number of hydrogen atoms in each material in the batch, compositions[:, 2] the number of helium atoms, and so on.
- Parameters:
batch – Input batch.
- Returns:
Normalization context.
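The composition matrix described above can be sketched in pure Python as follows. Nested lists stand in for the integer torch tensor, and the helper name and the default column count (119, covering atomic numbers 0-118) are assumptions for illustration.

```python
def composition_matrix(atomic_numbers_per_material, num_elements=119):
    """Build the (batch_size, num_elements) composition matrix: row i counts,
    per element, the atoms of material i. Column j corresponds to atomic
    number j (column 1 = hydrogen, column 2 = helium, ...)."""
    compositions = []
    for atomic_numbers in atomic_numbers_per_material:
        row = [0] * num_elements
        for z in atomic_numbers:
            row[z] += 1
        compositions.append(row)
    return compositions

# Batch of two materials: H2O (Z = 1, 1, 8) and a lone He atom (Z = 2).
comps = composition_matrix([[1, 1, 8], [2]])
print(comps[0][1], comps[0][8], comps[1][2])  # → 2 1 1
```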