PySpark StorageLevel is used to decide how an RDD should be stored. It determines whether the RDD is serialized and whether its partitions are replicated. In Apache Spark, it controls whether an RDD is kept in memory, stored on disk, or both. The class also provides commonly used storage levels as static constants, such as MEMORY_ONLY.
The following code block shows the class definition of StorageLevel:
PySpark provides several predefined StorageLevels as class constants, each deciding where RDD partitions are kept, whether they are serialized, and how many replicas exist, such as DISK_ONLY, MEMORY_ONLY, MEMORY_AND_DISK, and their 2x-replicated variants.
Example of PySpark StorageLevel
Here we use the storage level MEMORY_AND_DISK_2, which means each RDD partition is replicated on two nodes.
Output:

Disk Memory Serialized 2x Replicated