PyTorch is an optimized tensor library for deep learning on CPUs and GPUs. It ships with a rich set of packages that cover the building blocks of deep learning: tensor math, optimization, automatic differentiation, loss calculation, data loading, model export, and more. The following gives a brief overview of these packages.
torch: The torch package defines the data structure for multi-dimensional tensors and the mathematical operations over them.
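For example, we can create tensors and apply both element-wise and matrix operations directly from the torch package (an illustrative sketch, not from the original article):

```python
import torch

# Create two 2x3 tensors.
a = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
b = torch.ones(2, 3)

c = a + b                  # element-wise addition
d = torch.matmul(a, b.T)   # matrix multiplication: (2x3) @ (3x2) -> (2x2)

print(c.shape)  # torch.Size([2, 3])
print(d.shape)  # torch.Size([2, 2])
```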
torch.Tensor: A multi-dimensional matrix containing elements of a single data type.
torch.dtype: An object that represents the data type of a torch.Tensor.
torch.device: An object that represents the device on which a torch.Tensor is or will be allocated.
torch.layout: An object that represents the memory layout of a torch.Tensor.
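These three attributes can be inspected on any tensor (a minimal sketch for illustration):

```python
import torch

t = torch.zeros(3, 4, dtype=torch.float64)

print(t.dtype)   # torch.float64
print(t.device)  # cpu (the default device)
print(t.layout)  # torch.strided, the default dense layout

# Tensors can be converted between dtypes (and moved between devices) with .to():
t32 = t.to(dtype=torch.float32)
```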
Type Info: The numerical properties of a torch.dtype can be accessed through either torch.iinfo or torch.finfo.
torch.finfo: An object that represents the numerical properties of a floating-point torch.dtype.
torch.iinfo: An object that represents the numerical properties of an integer torch.dtype.
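For instance, we can query machine epsilon and representable ranges (an illustrative sketch):

```python
import torch

f = torch.finfo(torch.float32)   # properties of a floating-point dtype
i = torch.iinfo(torch.int8)      # properties of an integer dtype

print(f.eps)         # machine epsilon for float32 (2**-23)
print(f.max)         # largest representable float32
print(i.min, i.max)  # -128 127
```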
torch.sparse: Torch supports sparse tensors in COO (coordinate) format, which efficiently stores and processes tensors in which the majority of elements are zero.
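In COO format only the coordinates and values of the non-zero entries are stored (a minimal sketch):

```python
import torch

# A 3x3 tensor with only two non-zero entries, stored in COO format:
indices = torch.tensor([[0, 2],   # row indices
                        [1, 2]])  # column indices
values = torch.tensor([3., 5.])
s = torch.sparse_coo_tensor(indices, values, (3, 3))

dense = s.to_dense()  # materialize the full 3x3 tensor
print(dense)
```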
torch.cuda: Torch supports CUDA tensor types, which implement the same functions as CPU tensors but utilize GPUs for computation.
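Because the API is device-agnostic, the usual pattern is to pick a device once and write the rest of the code identically (a sketch that also runs on CPU-only machines):

```python
import torch

# Guard on availability so the code also runs without a GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(2, 2, device=device)
y = x @ x  # the same operation works regardless of where x lives

print(y.device.type)  # "cuda" or "cpu"
```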
torch.Storage: A torch.Storage is a contiguous, one-dimensional array of a single data type.
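Every tensor is a view over such a flat backing array; its length depends only on the number of elements, not on the tensor's shape (an illustrative sketch):

```python
import torch

t = torch.arange(6, dtype=torch.float32).reshape(2, 3)
s = t.storage()  # the flat, contiguous backing array

print(len(s))  # 6 elements, regardless of the tensor's 2x3 view
print(s[4])    # 4.0
```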
torch.nn: This package provides the classes and modules used to implement and train neural networks.
torch.nn.functional: This package provides stateless, functional counterparts to the classes in torch.nn.
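The two styles are commonly mixed: stateful layers from torch.nn, stateless operations from torch.nn.functional (a minimal sketch):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A module-style layer from torch.nn ...
layer = nn.Linear(4, 2)

# ... combined with the functional API from torch.nn.functional.
x = torch.randn(8, 4)
out = F.relu(layer(x))

print(out.shape)         # torch.Size([8, 2])
print((out >= 0).all())  # ReLU guarantees non-negative outputs
```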
torch.optim: This package is used to implement various optimization algorithms.
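The typical loop is zero_grad, backward, step. Below is a minimal sketch that fits y = 2x with plain SGD (the model and data are invented for illustration):

```python
import torch
import torch.nn as nn

# One weight, no bias: the model should learn w = 2.
model = nn.Linear(1, 1, bias=False)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.tensor([[1.], [2.], [3.]])
y = 2 * x

for _ in range(100):
    opt.zero_grad()                       # clear accumulated gradients
    loss = ((model(x) - y) ** 2).mean()   # mean squared error
    loss.backward()                       # compute gradients
    opt.step()                            # update the parameters

print(loss.item())  # close to zero after training
```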
torch.autograd: This package provides classes and functions implementing automatic differentiation of arbitrary scalar-valued functions.
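Marking a tensor with requires_grad=True makes autograd record the operations on it, so gradients can be computed with backward() (an illustrative sketch):

```python
import torch

# Differentiate f(x) = x**3 + 2*x at x = 2; analytically f'(x) = 3*x**2 + 2 = 14.
x = torch.tensor(2.0, requires_grad=True)
f = x ** 3 + 2 * x
f.backward()  # populates x.grad

print(x.grad)  # tensor(14.)
```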
torch.distributed: This package supports three backends (Gloo, NCCL, and MPI), each with different capabilities.
torch.distributions: This package allows us to construct stochastic computation graphs and stochastic gradient estimators for optimization.
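Distribution objects support sampling and log-density evaluation, the two operations such estimators are built from (a minimal sketch):

```python
import torch
from torch.distributions import Normal

dist = Normal(loc=0.0, scale=1.0)

sample = dist.sample((5,))               # draw 5 values
logp = dist.log_prob(torch.tensor(0.0))  # log density at the mean

print(sample.shape)  # torch.Size([5])
print(logp)          # -0.5 * log(2*pi), about -0.9189
```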
torch.hub: A pre-trained model repository designed to facilitate research reproducibility.
torch.multiprocessing: A wrapper around the native multiprocessing module.
torch.utils.bottleneck: A tool that can be used as an initial step for debugging bottlenecks in a program.
torch.utils.checkpoint: Used to checkpoint parts of a model so that intermediate activations are recomputed during the backward pass instead of being stored, trading compute for memory.
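The forward result is identical to running the function directly; only the memory behavior during backward changes (a minimal sketch with an invented sub-computation):

```python
import torch
from torch.utils.checkpoint import checkpoint

def block(x):
    # A sub-computation whose activations we don't want to keep in memory.
    return torch.relu(x) ** 2

x = torch.randn(4, requires_grad=True)

# Activations inside `block` are recomputed during backward instead of stored.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()

print(torch.equal(y, block(x)))  # True: the forward output is unchanged
```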
torch.utils.cpp_extension: Used to create extensions written in C++, CUDA, and other languages.
torch.utils.data: This package is mainly used for creating datasets and loading data from them.
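A custom dataset only needs __len__ and __getitem__; a DataLoader then handles batching and shuffling. The toy dataset below is invented for illustration:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """Toy dataset of (n, n**2) pairs."""
    def __len__(self):
        return 10

    def __getitem__(self, idx):
        return torch.tensor(float(idx)), torch.tensor(float(idx ** 2))

loader = DataLoader(SquaresDataset(), batch_size=4, shuffle=False)

first_x, first_y = next(iter(loader))
print(first_x)  # tensor([0., 1., 2., 3.])
print(first_y)  # tensor([0., 1., 4., 9.])
```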
torch.utils.dlpack: Used to convert tensors to and from the DLPack format, for zero-copy exchange with other frameworks.
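A round trip through DLPack produces a tensor that shares memory with the original (an illustrative sketch):

```python
import torch
from torch.utils.dlpack import to_dlpack, from_dlpack

t = torch.arange(4, dtype=torch.float32)

# Export to a DLPack capsule and import it back; no data is copied.
capsule = to_dlpack(t)
t2 = from_dlpack(capsule)

print(torch.equal(t, t2))  # True
```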
torch.onnx: The ONNX exporter is a trace-based exporter, which means it operates by executing the model once and exporting the operators that were actually run during that execution.