Learning Rate Schedulers

This page contains the API reference documentation for learning rate schedulers included in timm.

Schedulers

Factory functions

timm.scheduler.create_scheduler

( args, optimizer: Optimizer, updates_per_epoch: int = 0 )

Builds a learning rate scheduler from an argparse-style args namespace, as produced by timm's training scripts, by forwarding the relevant fields to create_scheduler_v2().

timm.scheduler.create_scheduler_v2

( optimizer: Optimizer, sched: str = 'cosine', num_epochs: int = 300, decay_epochs: int = 90, decay_milestones: typing.List[int] = (90, 180, 270), cooldown_epochs: int = 0, patience_epochs: int = 10, decay_rate: float = 0.1, min_lr: float = 0, warmup_lr: float = 1e-05, warmup_epochs: int = 0, warmup_prefix: bool = False, noise: typing.Union[float, typing.List[float]] = None, noise_pct: float = 0.67, noise_std: float = 1.0, noise_seed: int = 42, cycle_mul: float = 1.0, cycle_decay: float = 0.1, cycle_limit: int = 1, k_decay: float = 1.0, plateau_mode: str = 'max', step_on_epochs: bool = True, updates_per_epoch: int = 0 )
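
Both factory functions return a (scheduler, num_epochs) tuple; the returned epoch count is the total number of epochs to train for, which may differ from the requested num_epochs once cooldown epochs are accounted for. A minimal usage sketch (the model, optimizer, and hyperparameter values here are illustrative, not recommended defaults):

```python
import torch
import timm
from timm.scheduler import create_scheduler_v2

model = timm.create_model('resnet18', num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Cosine schedule with a short linear warmup; all values illustrative.
scheduler, num_epochs = create_scheduler_v2(
    optimizer,
    sched='cosine',
    num_epochs=100,
    min_lr=1e-6,
    warmup_lr=1e-5,
    warmup_epochs=5,
)

for epoch in range(num_epochs):
    # ... train one epoch ...
    scheduler.step(epoch + 1)  # timm schedulers take the epoch index explicitly
```

To step per optimizer update instead of per epoch, pass step_on_epochs=False together with updates_per_epoch, and call scheduler.step_update(num_updates) after each batch.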

Scheduler Classes

class timm.scheduler.CosineLRScheduler

( optimizer: Optimizer, t_initial: int, lr_min: float = 0.0, cycle_mul: float = 1.0, cycle_decay: float = 1.0, cycle_limit: int = 1, warmup_t = 0, warmup_lr_init = 0, warmup_prefix = False, t_in_epochs = True, noise_range_t = None, noise_pct = 0.67, noise_std = 1.0, noise_seed = 42, k_decay = 1.0, initialize = True )

Cosine annealing with restarts, as described in SGDR: Stochastic Gradient Descent with Warm Restarts (https://arxiv.org/abs/1608.03983).

Implementation inspired by AllenNLP's cosine scheduler: https://github.com/allenai/allennlp/blob/master/allennlp/training/learning_rate_schedulers/cosine.py

The k-decay option is based on k-decay: A New Method For Learning Rate Schedule (https://arxiv.org/abs/2004.05909).
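
A sketch of constructing this scheduler directly, with warm restarts enabled (all values illustrative; the toy optimizer exists only to make the snippet runnable):

```python
import torch
from timm.scheduler import CosineLRScheduler

# Toy optimizer over a dummy parameter, just so the sketch runs standalone.
optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)

scheduler = CosineLRScheduler(
    optimizer,
    t_initial=50,       # length of the first cycle, in epochs (t_in_epochs=True)
    lr_min=1e-6,
    cycle_mul=2.0,      # each restart cycle is twice as long as the previous one
    cycle_decay=0.5,    # each restart starts from half the previous cycle's peak LR
    cycle_limit=3,      # run at most three cycles, then hold at lr_min
    warmup_t=5,
    warmup_lr_init=1e-5,
)

for epoch in range(50 + 100 + 200):  # three cycles of 50, 100 and 200 epochs
    # ... train one epoch ...
    scheduler.step(epoch + 1)
```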

class timm.scheduler.MultiStepLRScheduler

( optimizer: Optimizer, decay_t: typing.List[int], decay_rate: float = 1.0, warmup_t = 0, warmup_lr_init = 0, warmup_prefix = True, t_in_epochs = True, noise_range_t = None, noise_pct = 0.67, noise_std = 1.0, noise_seed = 42, initialize = True )
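
This scheduler drops the LR by decay_rate at each milestone listed in decay_t. A minimal sketch (milestones and rate are illustrative; optimizer as in the sketches above):

```python
from timm.scheduler import MultiStepLRScheduler

# Decay the LR 10x at epochs 90, 180 and 270, after a 5-epoch warmup.
scheduler = MultiStepLRScheduler(
    optimizer,
    decay_t=[90, 180, 270],
    decay_rate=0.1,
    warmup_t=5,
    warmup_lr_init=1e-5,
)
```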

class timm.scheduler.PlateauLRScheduler

( optimizer, decay_rate = 0.1, patience_t = 10, verbose = True, threshold = 0.0001, cooldown_t = 0, warmup_t = 0, warmup_lr_init = 0, lr_min = 0, mode = 'max', noise_range_t = None, noise_type = 'normal', noise_pct = 0.67, noise_std = 1.0, noise_seed = None, initialize = True )

Decays the LR by a factor each time the monitored validation metric stops improving; the default mode='max' suits metrics that should increase, such as accuracy.
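
Because this scheduler reacts to a monitored value, pass the metric when stepping. A sketch, assuming a hypothetical validate() helper that returns the tracked metric (top-1 accuracy here, hence mode='max'):

```python
from timm.scheduler import PlateauLRScheduler

scheduler = PlateauLRScheduler(
    optimizer,
    decay_rate=0.1,
    patience_t=10,  # epochs without improvement before decaying
    mode='max',     # the monitored metric should increase
)

for epoch in range(100):
    # ... train one epoch ...
    val_acc = validate(model)           # hypothetical validation helper
    scheduler.step(epoch + 1, val_acc)  # the metric drives plateau detection
```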

class timm.scheduler.PolyLRScheduler

( optimizer: Optimizer, t_initial: int, power: float = 0.5, lr_min: float = 0.0, cycle_mul: float = 1.0, cycle_decay: float = 1.0, cycle_limit: int = 1, warmup_t = 0, warmup_lr_init = 0, warmup_prefix = False, t_in_epochs = True, noise_range_t = None, noise_pct = 0.67, noise_std = 1.0, noise_seed = 42, k_decay = 1.0, initialize = True )

Polynomial LR scheduler with warmup, noise, and k-decay.

The k-decay option is based on k-decay: A New Method For Learning Rate Schedule (https://arxiv.org/abs/2004.05909).
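
A sketch of a polynomial schedule (power and the other values are illustrative; the default power=0.5 gives a square-root-shaped decay, while larger powers decay faster early on):

```python
from timm.scheduler import PolyLRScheduler

scheduler = PolyLRScheduler(
    optimizer,
    t_initial=100,  # decay over 100 epochs
    power=2.0,      # quadratic decay curve
    lr_min=1e-6,
    warmup_t=5,
    warmup_lr_init=1e-5,
)
```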

class timm.scheduler.StepLRScheduler

( optimizer: Optimizer, decay_t: float, decay_rate: float = 1.0, warmup_t = 0, warmup_lr_init = 0, warmup_prefix = True, t_in_epochs = True, noise_range_t = None, noise_pct = 0.67, noise_std = 1.0, noise_seed = 42, initialize = True )
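
This scheduler decays the LR by decay_rate every decay_t epochs. Note that the default decay_rate=1.0 leaves the LR unchanged, so set it explicitly. A minimal sketch (interval and rate are illustrative):

```python
from timm.scheduler import StepLRScheduler

# Multiply the LR by 0.1 every 30 epochs.
scheduler = StepLRScheduler(
    optimizer,
    decay_t=30,
    decay_rate=0.1,
)
```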

class timm.scheduler.TanhLRScheduler

( optimizer: Optimizer, t_initial: int, lb: float = -7.0, ub: float = 3.0, lr_min: float = 0.0, cycle_mul: float = 1.0, cycle_decay: float = 1.0, cycle_limit: int = 1, warmup_t = 0, warmup_lr_init = 0, warmup_prefix = False, t_in_epochs = True, noise_range_t = None, noise_pct = 0.67, noise_std = 1.0, noise_seed = 42, initialize = True )

Hyperbolic-tangent decay with restarts, as described in the paper https://arxiv.org/abs/1806.01593.
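
A sketch of constructing this scheduler (lb and ub bound the slice of the tanh input range the schedule sweeps over; all values here are illustrative):

```python
from timm.scheduler import TanhLRScheduler

scheduler = TanhLRScheduler(
    optimizer,
    t_initial=100,  # decay over 100 epochs
    lb=-6.0,        # lower bound of the tanh input range
    ub=4.0,         # upper bound of the tanh input range
    lr_min=1e-6,
    warmup_t=5,
    warmup_lr_init=1e-5,
)
```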