mattertune.finetune.lr_scheduler
Classes
- class mattertune.finetune.lr_scheduler.StepLRConfig(*, type='StepLR', step_size, gamma)[source]
- Parameters:
type (Literal['StepLR'])
step_size (int)
gamma (float)
- type: Literal['StepLR']
Type of the learning rate scheduler.
- step_size: int
Period of learning rate decay.
- gamma: float
Multiplicative factor of learning rate decay.
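Assuming this config maps onto the PyTorch scheduler of the same name (`torch.optim.lr_scheduler.StepLR`), the resulting schedule can be sketched in pure Python:

```python
def step_lr(base_lr: float, step_size: int, gamma: float, epoch: int) -> float:
    """LR after `epoch` epochs: decayed by `gamma` every `step_size` epochs."""
    return base_lr * gamma ** (epoch // step_size)

# With step_size=30 and gamma=0.1, the LR drops by 10x at epochs 30, 60, ...
```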
- class mattertune.finetune.lr_scheduler.MultiStepLRConfig(*, type='MultiStepLR', milestones, gamma)[source]
- Parameters:
type (Literal['MultiStepLR'])
milestones (list[int])
gamma (float)
- type: Literal['MultiStepLR']
Type of the learning rate scheduler.
- milestones: list[int]
List of epoch indices. Must be increasing.
- gamma: float
Multiplicative factor of learning rate decay.
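Assuming the usual `MultiStepLR` semantics, the learning rate is multiplied by gamma once per milestone reached; a minimal sketch:

```python
from bisect import bisect_right

def multi_step_lr(base_lr: float, milestones: list[int], gamma: float, epoch: int) -> float:
    """LR after `epoch` epochs: multiplied by `gamma` at each milestone epoch."""
    # bisect_right counts how many milestones have been reached by `epoch`.
    return base_lr * gamma ** bisect_right(milestones, epoch)
```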
- class mattertune.finetune.lr_scheduler.ExponentialConfig(*, type='ExponentialLR', gamma)[source]
- Parameters:
type (Literal['ExponentialLR'])
gamma (float)
- type: Literal['ExponentialLR']
Type of the learning rate scheduler.
- gamma: float
Multiplicative factor of learning rate decay.
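The exponential schedule multiplies the learning rate by gamma every epoch; as a closed form:

```python
def exponential_lr(base_lr: float, gamma: float, epoch: int) -> float:
    """LR after `epoch` epochs: multiplied by `gamma` once per epoch."""
    return base_lr * gamma ** epoch
```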
- class mattertune.finetune.lr_scheduler.ReduceOnPlateauConfig(*, type='ReduceLROnPlateau', mode, monitor='val_loss', factor, patience, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)[source]
- Parameters:
type (Literal['ReduceLROnPlateau'])
mode (Literal['min', 'max'])
monitor (str)
factor (float)
patience (int)
threshold (float)
threshold_mode (Literal['rel', 'abs'])
cooldown (int)
min_lr (float)
eps (float)
- type: Literal['ReduceLROnPlateau']
Type of the learning rate scheduler.
- mode: Literal['min', 'max']
One of {“min”, “max”}. In “min” mode, the learning rate is reduced when the monitored quantity stops decreasing; in “max” mode, when it stops increasing.
- monitor: str
Quantity to be monitored.
- factor: float
Factor by which the learning rate will be reduced.
- patience: int
Number of epochs with no improvement after which learning rate will be reduced.
- threshold: float
Threshold for measuring the new optimum.
- threshold_mode: Literal['rel', 'abs']
One of {“rel”, “abs”}. In “rel” mode the improvement threshold is relative to the best value seen so far; in “abs” mode it is an absolute difference.
- cooldown: int
Number of epochs to wait before resuming normal operation.
- min_lr: float
A lower bound on the learning rate.
- eps: float
Minimal decay applied to the learning rate: if the difference between the old and new learning rate is smaller than eps, the update is ignored.
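Assuming this config mirrors PyTorch's `ReduceLROnPlateau`, the plateau logic can be sketched in pure Python for the default-like case of mode="min" and threshold_mode="rel":

```python
class PlateauReducer:
    """Illustrative sketch of ReduceLROnPlateau (mode="min", threshold_mode="rel")."""

    def __init__(self, lr: float, factor: float = 0.1, patience: int = 10,
                 threshold: float = 1e-4, cooldown: int = 0,
                 min_lr: float = 0.0, eps: float = 1e-8):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.threshold = threshold
        self.cooldown = cooldown
        self.min_lr = min_lr
        self.eps = eps
        self.best = float("inf")
        self.bad_epochs = 0
        self.cooldown_counter = 0

    def step(self, metric: float) -> float:
        # In "rel" mode, an improvement must beat the best value by a
        # relative margin of `threshold`.
        if metric < self.best * (1 - self.threshold):
            self.best = metric
            self.bad_epochs = 0
        elif self.cooldown_counter > 0:
            # During cooldown, bad epochs are not counted.
            self.cooldown_counter -= 1
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                new_lr = max(self.lr * self.factor, self.min_lr)
                # `eps` suppresses negligible LR updates.
                if self.lr - new_lr > self.eps:
                    self.lr = new_lr
                self.bad_epochs = 0
                self.cooldown_counter = self.cooldown
        return self.lr
```

For example, with patience=2 the learning rate is first reduced after the third consecutive epoch without improvement.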
- class mattertune.finetune.lr_scheduler.CosineAnnealingLRConfig(*, type='CosineAnnealingLR', T_max, eta_min=0, last_epoch=-1)[source]
- Parameters:
type (Literal['CosineAnnealingLR'])
T_max (int)
eta_min (float)
last_epoch (int)
- type: Literal['CosineAnnealingLR']
Type of the learning rate scheduler.
- T_max: int
Maximum number of iterations.
- eta_min: float
Minimum learning rate.
- last_epoch: int
The index of the last epoch. Pass -1 (the default) to start the schedule from the beginning.
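Assuming the standard `CosineAnnealingLR` behavior, the learning rate follows a half-cosine from the base rate at epoch 0 down to eta_min at epoch T_max; the closed form:

```python
import math

def cosine_annealing_lr(base_lr: float, T_max: int, eta_min: float, epoch: int) -> float:
    """Cosine decay from `base_lr` at epoch 0 to `eta_min` at epoch `T_max`."""
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / T_max)) / 2
```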
- class mattertune.finetune.lr_scheduler.ConstantLRConfig(*, type='ConstantLR', factor=0.3333333333333333, total_iters=5)[source]
- Parameters:
type (Literal['ConstantLR'])
factor (float)
total_iters (int)
- type: Literal['ConstantLR']
Type of the learning rate scheduler.
- factor: float
Factor by which the learning rate is multiplied until total_iters is reached.
- total_iters: int
Number of steps for which the learning rate is multiplied by factor; afterwards the base learning rate is restored.
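Assuming the usual `ConstantLR` semantics, the learning rate is held at base_lr * factor for the first total_iters steps and then reverts to base_lr:

```python
def constant_lr(base_lr: float, factor: float, total_iters: int, epoch: int) -> float:
    """LR scaled by `factor` for the first `total_iters` steps, then restored."""
    return base_lr * factor if epoch < total_iters else base_lr
```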
- class mattertune.finetune.lr_scheduler.LinearLRConfig(*, type='LinearLR', start_factor=0.3333333333333333, end_factor=1.0, total_iters=5)[source]
- Parameters:
type (Literal['LinearLR'])
start_factor (float)
end_factor (float)
total_iters (int)
- type: Literal['LinearLR']
Type of the learning rate scheduler.
- start_factor: float
Factor by which the learning rate is multiplied in the first epoch.
- end_factor: float
Factor by which the learning rate is multiplied at the end of the linear ramp.
- total_iters: int
Number of iterations over which the multiplicative factor changes linearly from start_factor to end_factor.
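Assuming the usual `LinearLR` semantics (commonly used for warmup), the multiplicative factor interpolates linearly from start_factor to end_factor over total_iters steps and then stays at end_factor:

```python
def linear_lr(base_lr: float, start_factor: float, end_factor: float,
              total_iters: int, epoch: int) -> float:
    """LR with a factor ramping linearly from `start_factor` to `end_factor`
    over `total_iters` steps, constant at `end_factor` afterwards."""
    progress = min(epoch, total_iters) / total_iters
    return base_lr * (start_factor + (end_factor - start_factor) * progress)
```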