ding.torch_utils.backend_helper

enable_tf32()

Overview

Enable TF32 on matmul and cuDNN operations for faster computation. This only takes effect on Ampere (and newer) GPU devices. For detailed information, please refer to: https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices.

Full Source Code

../ding/torch_utils/backend_helper.py

import torch


def enable_tf32() -> None:
    """
    Overview:
        Enable tf32 on matmul and cudnn for faster computation. This only works on Ampere GPU devices. \
        For detailed information, please refer to: \
        https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices.
    """
    torch.backends.cuda.matmul.allow_tf32 = True  # allow tf32 on matmul
    torch.backends.cudnn.allow_tf32 = True  # allow tf32 on cudnn
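A minimal usage sketch: the two backend flags below are standard PyTorch settings and can be toggled even on a CPU-only build, so the effect of `enable_tf32()` can be inspected directly (the flags only change numeric behavior when the code later runs on an Ampere-or-newer GPU). The inlined flag assignments here mirror what `enable_tf32()` does.

```python
import torch

# Inspect the current TF32 settings (matmul TF32 defaults to False on
# recent PyTorch versions; cuDNN TF32 defaults to True).
print("before:",
      torch.backends.cuda.matmul.allow_tf32,
      torch.backends.cudnn.allow_tf32)

# Equivalent to calling ding.torch_utils.backend_helper.enable_tf32():
torch.backends.cuda.matmul.allow_tf32 = True  # allow TF32 on matmul
torch.backends.cudnn.allow_tf32 = True        # allow TF32 on cuDNN

print("after:",
      torch.backends.cuda.matmul.allow_tf32,
      torch.backends.cudnn.allow_tf32)
```

Note that these are process-wide flags: call `enable_tf32()` once at program startup, before constructing models, so that all subsequent matmul and convolution kernels may use TF32.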