mattertune.backbones.jmp.model

Classes

CutoffsConfig(*, main, aeaint, qint, aint)

JMPBackboneConfig(*, properties, optimizer)

JMPBackboneModule(hparams)

JMPGraphComputerConfig(*, pbc[, cutoffs, ...])

MaxNeighborsConfig(*, main, aeaint, qint, aint)

class mattertune.backbones.jmp.model.CutoffsConfig(*, main, aeaint, qint, aint)[source]
Parameters:
  • main (float)

  • aeaint (float)

  • qint (float)

  • aint (float)

main: float
aeaint: float
qint: float
aint: float
classmethod from_constant(value)[source]
Parameters:

value (float)
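
A minimal usage sketch, assuming from_constant fills all four cutoffs with the same radius (this mirrors the defaults used by JMPGraphComputerConfig):

    from mattertune.backbones.jmp.model import CutoffsConfig

    # Presumably sets main, aeaint, qint, and aint all to 12.0 Angstrom.
    cutoffs = CutoffsConfig.from_constant(12.0)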

class mattertune.backbones.jmp.model.MaxNeighborsConfig(*, main, aeaint, qint, aint)[source]
Parameters:
  • main (int)

  • aeaint (int)

  • qint (int)

  • aint (int)

main: int
aeaint: int
qint: int
aint: int
classmethod from_goc_base_proportions(max_neighbors)[source]
GOC base proportions:
  • max_neighbors: 30

  • max_neighbors_qint: 8

  • max_neighbors_aeaint: 20

  • max_neighbors_aint: 1000

Parameters:

max_neighbors (int)
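
A hedged usage sketch: with max_neighbors=30 this classmethod should reproduce the GOC base values listed above; how other inputs rescale the remaining fields is an assumption here.

    from mattertune.backbones.jmp.model import MaxNeighborsConfig

    # Assumed to yield main=30, aeaint=20, qint=8, aint=1000 (the GOC base
    # proportions); other inputs presumably rescale these relative to main.
    max_neighbors = MaxNeighborsConfig.from_goc_base_proportions(30)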

class mattertune.backbones.jmp.model.JMPGraphComputerConfig(*, pbc, cutoffs=CutoffsConfig(main=12.0, aeaint=12.0, qint=12.0, aint=12.0), max_neighbors=MaxNeighborsConfig(main=30, aeaint=20, qint=8, aint=1000), per_graph_radius_graph=False)[source]
Parameters:
  • pbc (bool)

  • cutoffs (CutoffsConfig)

  • max_neighbors (MaxNeighborsConfig)

  • per_graph_radius_graph (bool)

pbc: bool

Whether to use periodic boundary conditions.

cutoffs: CutoffsConfig

The cutoffs for the radius graph.

max_neighbors: MaxNeighborsConfig

The maximum number of neighbors for the radius graph.

per_graph_radius_graph: bool

Whether to compute the radius graph separately for each graph in the batch.
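
An illustrative configuration for a periodic system, spelling out the documented defaults explicitly (the values below are not recommendations):

    from mattertune.backbones.jmp.model import (
        CutoffsConfig,
        JMPGraphComputerConfig,
        MaxNeighborsConfig,
    )

    graph_computer = JMPGraphComputerConfig(
        pbc=True,
        cutoffs=CutoffsConfig(main=12.0, aeaint=12.0, qint=12.0, aint=12.0),
        max_neighbors=MaxNeighborsConfig(main=30, aeaint=20, qint=8, aint=1000),
        per_graph_radius_graph=False,
    )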

class mattertune.backbones.jmp.model.JMPBackboneConfig(*, properties, optimizer, lr_scheduler=None, ignore_gpu_batch_transform_error=True, normalizers={}, name='jmp', ckpt_path, graph_computer)[source]
Parameters:
  • properties (Sequence[PropertyConfig])

  • optimizer (OptimizerConfig)

  • lr_scheduler (LRSchedulerConfig | None)

  • ignore_gpu_batch_transform_error (bool)

  • normalizers (Mapping[str, Sequence[NormalizerConfig]])

  • name (Literal['jmp'])

  • ckpt_path (Path | CachedPath)

  • graph_computer (JMPGraphComputerConfig)

name: Literal['jmp']

The type of the backbone.

ckpt_path: Path | CE.CachedPath

The path to the pre-trained model checkpoint.

graph_computer: JMPGraphComputerConfig

The configuration for the graph computer.

create_model()[source]

Creates an instance of the finetune module for this configuration.

classmethod ensure_dependencies()[source]

Ensure that all dependencies are installed.

This method should raise an exception if any dependencies are missing, with a message indicating which dependencies are missing and how to install them.
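
A hedged end-to-end sketch of building a backbone config and creating the finetune module. The property and optimizer config classes (EnergyPropertyConfig, AdamWConfig) and their import path are assumptions; substitute whatever property and optimizer configs your setup actually uses, and point ckpt_path at a real JMP checkpoint.

    from pathlib import Path

    from mattertune.backbones.jmp.model import (
        JMPBackboneConfig,
        JMPGraphComputerConfig,
    )
    # Assumed import path and class names, for illustration only.
    from mattertune.configs import AdamWConfig, EnergyPropertyConfig

    config = JMPBackboneConfig(
        ckpt_path=Path("checkpoints/jmp-s.pt"),  # hypothetical checkpoint location
        graph_computer=JMPGraphComputerConfig(pbc=True),
        properties=[EnergyPropertyConfig()],     # illustrative property list
        optimizer=AdamWConfig(lr=1e-4),
    )

    JMPBackboneConfig.ensure_dependencies()  # raises if the jmp package is missing
    module = config.create_model()           # returns a JMPBackboneModule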

class mattertune.backbones.jmp.model.JMPBackboneModule(hparams)[source]
Parameters:

hparams (TFinetuneModuleConfig)

classmethod hparams_cls()[source]

Return the hyperparameters config class for this module.

requires_disabled_inference_mode()[source]

Whether the model requires inference mode to be disabled.
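
One way this flag might be consumed when setting up a PyTorch Lightning trainer (an illustration, not part of this API; module is assumed to be a constructed JMPBackboneModule):

    import lightning.pytorch as pl

    # Backbones that obtain forces via autograd need inference mode disabled.
    trainer = pl.Trainer(inference_mode=not module.requires_disabled_inference_mode())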

create_model()[source]

Initialize both the pre-trained backbone and the output heads for the properties to predict.

You should also construct any other nn.Module instances necessary for the forward pass here.

model_forward_context(data)[source]

Context manager for the model forward pass.

This is used for any setup that needs to be done before the forward pass, e.g., setting pos.requires_grad_() for gradient-based force prediction.
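
A conceptual sketch of what such a context manager can look like (not the actual implementation): enable gradients on the atomic positions so forces can later be obtained by differentiating the predicted energy.

    import contextlib

    import torch

    @contextlib.contextmanager
    def example_forward_context(data):
        # Illustrative only: track gradients w.r.t. positions during the forward pass.
        with torch.enable_grad():
            data.pos.requires_grad_(True)
            yield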

model_forward(batch, return_backbone_output=False)[source]

Forward pass of the model.

Parameters:
  • batch – Input batch.

  • return_backbone_output – Whether to return the output of the backbone model.

Returns:

Prediction of the model.

pretrained_backbone_parameters()[source]

Return the parameters of the backbone model.

output_head_parameters()[source]

Return the parameters of the output heads.

cpu_data_transform(data)[source]

Transform data (on the CPU) before being batched and sent to the GPU.

collate_fn(data_list)[source]

Collate function for the DataLoader.

gpu_batch_transform(batch)[source]

Transform batch (on the GPU) before being fed to the model.

This will mainly be used to compute the (radius or knn) graph from the atomic positions.
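
For intuition, a minimal radius-graph construction (conceptual only; it ignores periodic boundary conditions and is not the JMP implementation):

    import torch

    def radius_graph(pos: torch.Tensor, cutoff: float) -> torch.Tensor:
        """Connect every pair of atoms closer than `cutoff`, excluding self-edges."""
        dist = torch.cdist(pos, pos)                              # (N, N) pairwise distances
        mask = (dist < cutoff) & ~torch.eye(len(pos), dtype=torch.bool)
        return mask.nonzero().t()                                 # (2, num_edges) edge index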

batch_to_labels(batch)[source]

Extract ground truth values from a batch. The output of this function should be a dictionary with keys corresponding to the target names and values corresponding to the ground truth values. The values should be torch tensors that match, in shape, the output of the corresponding output head.
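
An illustrative shape for the returned dictionary, assuming "energy" and "forces" targets (the actual keys depend on the configured properties):

    import torch

    batch_size, num_atoms = 4, 100  # illustrative sizes
    labels = {
        "energy": torch.zeros(batch_size),    # one scalar per structure
        "forces": torch.zeros(num_atoms, 3),  # one 3-vector per atom
    }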

atoms_to_data(atoms, has_labels)[source]

Convert an ASE atoms object to a data object. This is used to convert the input data to the format expected by the model.

Parameters:
  • atoms – ASE atoms object.

  • has_labels – Whether the atoms object contains labels.
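
A hedged usage sketch, assuming module is a constructed JMPBackboneModule:

    import ase

    # Hypothetical input: a cubic copper cell with no attached labels.
    atoms = ase.Atoms(
        "Cu4",
        positions=[[0, 0, 0], [0, 1.8, 1.8], [1.8, 0, 1.8], [1.8, 1.8, 0]],
        cell=[3.6, 3.6, 3.6],
        pbc=True,
    )
    data = module.atoms_to_data(atoms, has_labels=False)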

create_normalization_context_from_batch(batch)[source]

Create a normalization context from a batch. This is used to normalize and denormalize the properties.

The normalization context contains all the information required to normalize and denormalize the properties. Currently, this only includes the compositions of the materials in the batch. The compositions should be provided as an integer tensor of shape (batch_size, num_elements), where each row (i.e., compositions[i]) corresponds to the composition vector of the i-th material in the batch.

The composition vector is a vector that maps each element to the number of atoms of that element in the material. For example, compositions[:, 1] corresponds to the number of Hydrogen atoms in each material in the batch, compositions[:, 2] corresponds to the number of Helium atoms, and so on.

Parameters:

batch – Input batch.

Returns:

Normalization context.
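
For intuition, one way to build such a composition matrix from per-structure atomic numbers (an illustration of the described layout, not necessarily how this method computes it):

    import torch

    # Two structures: H2O (atomic numbers 1, 1, 8) and NaCl (11, 17).
    num_elements = 120
    structures = [torch.tensor([1, 1, 8]), torch.tensor([11, 17])]
    compositions = torch.stack(
        [torch.bincount(z, minlength=num_elements) for z in structures]
    )
    print(compositions[:, 1])  # tensor([2, 0]) -> hydrogen count per structure
    print(compositions[:, 8])  # tensor([1, 0]) -> oxygen count per structure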