
Hash Join Algorithm

The Hash Join algorithm is used to perform natural join and equi-join operations. The idea behind the algorithm is to partition the tuples of each input relation into sets, based on the hash value of the join attributes computed by a hash function. The main goal of using a hash function is to reduce the number of tuple comparisons and thereby make the join operation more efficient.

For example, suppose a tuple a of relation r and a tuple b of relation s satisfy the join condition, i.e., they have the same value for the join attributes. Then they also have the same hash value, say i, so tuple a falls into partition ri and tuple b into partition si. Consequently, the tuples of ri need to be compared only with the tuples of si; no comparisons against tuples in any other partition are required. This is how the hash join operation works.

Hash Join Algorithm

//Partition s//
for each tuple ts in s do begin
    i := h(ts[JoinAttrs]);
    Hsi := Hsi ∪ {ts};
end
//Partition r//
for each tuple tr in r do begin
    i := h(tr[JoinAttrs]);
    Hri := Hri ∪ {tr};
end
//Perform the join operation on each partition//
for i := 0 to nh do begin
    read Hsi and build an in-memory hash index on it;
    for each tuple tr in Hri do begin
        probe the hash index on Hsi to locate all tuples ts
            such that ts[JoinAttrs] = tr[JoinAttrs];
        for each matching tuple ts in Hsi do begin
            add tr ⋈ ts to the result;
        end
    end
end

This is the hash join algorithm for computing the natural join of two relations r and s. The notation used in the algorithm:

tr ⋈ ts: the concatenation of the attributes of tuples tr and ts, followed by projecting out the repeated attributes.

tr and ts: tuples of relations r and s, respectively.

Let's understand the hash join algorithm with the following steps:

Step 1: First, both relations r and s are partitioned using the hash function h.

Step 2: After partitioning, a separate indexed nested-loop join is performed on each pair of partitions i, with a loop running from i = 0 to nh.

Step 3: For each nested-loop join, the system first builds an in-memory hash index on Hsi and then probes it with the tuples of Hri. In the algorithm, relation s is the build input and relation r is the probe input.

A benefit of the hash join algorithm is that the hash index on Hsi is built in memory, so tuples can be fetched without accessing the disk. For this reason, the smaller of the two input relations should be used as the build relation.
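The steps above can be sketched in Python. This is a simplified, fully in-memory version: real systems operate on disk blocks, and the relation and attribute names below are purely illustrative.

```python
def hash_join(r, s, join_attr, nh=4):
    """Equi-join relations r and s (lists of dicts) on join_attr,
    using nh hash partitions."""
    h = lambda v: hash(v) % nh  # the partitioning hash function

    # Partition phase: route each tuple to a partition by hash value.
    r_parts = [[] for _ in range(nh)]
    s_parts = [[] for _ in range(nh)]
    for ts in s:
        s_parts[h(ts[join_attr])].append(ts)
    for tr in r:
        r_parts[h(tr[join_attr])].append(tr)

    result = []
    # Build-and-probe phase, one partition pair at a time.
    for i in range(nh):
        # Build an in-memory hash index on the build partition Hsi.
        index = {}
        for ts in s_parts[i]:
            index.setdefault(ts[join_attr], []).append(ts)
        # Probe the index with each tuple of Hri.
        for tr in r_parts[i]:
            for ts in index.get(tr[join_attr], []):
                # Concatenate the tuples; the shared join attribute
                # collapses into a single field.
                result.append({**ts, **tr})
    return result
```

Because only matching partitions are compared, a tuple of r is never checked against tuples of s that hash to a different partition.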

Recursive Partitioning in Hash Join

In recursive partitioning, the system repeats the partitioning of the input until each partition of the build input fits into memory. It is needed when the number of partitions nh is greater than or equal to the number of memory blocks: the relation cannot then be split in a single pass, because there are not enough buffer blocks. Instead, the relation is split in repeated passes. In each pass, the input is split into as many partitions as there are blocks available to serve as output buffers. Each bucket produced by one pass is read back separately and partitioned again in the next pass, creating smaller partitions; a different hash function is used in each pass. Recursive partitioning is therefore the preferred way to handle such cases.
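Recursive partitioning can be sketched as follows. The memory limit is expressed here as a hypothetical tuple count (`mem_tuples`) rather than blocks, and salting the hash with the pass depth stands in for "a different hash function in each pass":

```python
def recursive_partition(tuples, key, mem_tuples, nh=4, depth=0):
    """Split a list of dict-tuples into partitions of at most
    mem_tuples tuples each (unless all tuples share one key value,
    in which case further splitting is impossible)."""
    if len(tuples) <= mem_tuples or len({t[key] for t in tuples}) == 1:
        return [tuples]
    parts = [[] for _ in range(nh)]
    for t in tuples:
        # Including depth in the hash gives a different hash
        # function on every pass.
        parts[hash((depth, t[key])) % nh].append(t)
    out = []
    for p in parts:
        # Any partition that is still too large is split again.
        out.extend(recursive_partition(p, key, mem_tuples, nh, depth + 1))
    return out
```

The guard on a single distinct key value matters: tuples with identical join-attribute values always hash together, so no amount of repartitioning can separate them (this is the duplicate-value overflow case discussed below).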

Overflows in Hash Join

A hash-table overflow can occur in a partition i of the build relation s in the following cases:

Case 1: When the hash index on si is larger than main memory.

Case 2: When there are multiple tuples in the build relation with the same values for the join attributes.

Case 3: When the hash function does not hold randomness and uniformity characteristics.

Case 4: When some partitions have more tuples than the average and others have fewer; such partitioning is said to be skewed.

Handling the Overflows

We can handle such cases of hash-table overflows using various methods.

  • Using Fudge Factor

A small amount of skew can be handled by increasing the number of partitions using a fudge factor, a small multiplier applied to the partition count. The extra partitions reduce the expected size of each partition, so that each one, including its hash index, fits in memory. However, the fudge factor only makes the estimate of partition sizes more conservative; overflow is still possible. It is therefore suitable for handling small overflows, but not sufficient for handling large ones.
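As a small illustration, a common textbook choice is to set the partition count to the minimum value bs / M inflated by a fudge factor of about 1.2 (the exact multiplier is a tuning choice, not a fixed rule):

```python
import math

def partitions_with_fudge(bs, M, fudge=1.2):
    """Partition count for a build relation of bs blocks and a
    memory of M blocks, inflated by a fudge factor."""
    return math.ceil(bs / M * fudge)

# With bs = 1000 blocks and M = 100 blocks, the minimum would be
# 10 partitions; the fudge factor raises it to 12.
```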

As a result, we have two more methods for handling the overflows.


1. Overflow Resolution

The overflow resolution method is applied during the build phase when a hash index overflow is detected. The overflow resolution works in the following way:

If any build partition si is larger than memory, it is partitioned again into smaller partitions using a different hash function. The corresponding probe partition ri is partitioned with the same new hash function, and only tuples in matching sub-partitions are joined. This is a reactive approach: the method waits for an overflow to occur and only then takes action to resolve it.

2. Overflow Avoidance

The overflow avoidance method uses a careful approach while partitioning in order to avoid the occurrence of overflow in the build phase. The overflow avoidance works in the following way:

It initially partitions the build relation s into many small partitions and then combines some of them, in such a way that each combined partition fits in memory. The probe relation r is partitioned to match the combined partitions of s, but the sizes of the ri partitions do not matter in this method.

Both overflow resolution and overflow avoidance methods may fail on some partitions if a large number of tuples in s have the same value for the join attributes. In such a case, it is better to use block nested-loop join rather than applying the hash join technique for completing the join operation on those partitions.

Cost Analysis of Hash Join

For analyzing the cost of a hash join, we assume that no overflow occurs. We consider two cases:

1. Recursive partitioning is not needed

Partitioning requires a complete read and write of both relations r and s, for a total of 2(br + bs) block transfers, where br and bs are the number of blocks holding records of relations r and s. The build and probe phases then read each partition once more, for another br + bs block transfers. However, the partitions may occupy slightly more than br + bs blocks, because some blocks are only partially filled; accessing such blocks adds an overhead of approximately 2nh block transfers per relation, since each of the nh partitions can have a partially filled block that must be written and read back. Thus, the hash join cost estimate is:

Number of block transfers = 3(br + bs) + 4nh

Here, we can neglect the overhead value of 4nh since it is much smaller than br + bs value.

Number of disk seeks = 2(⌈br/bb⌉ + ⌈bs/bb⌉) + 2nh

Here, we assume that each input and output buffer is allocated bb blocks, and that the build and probe phases need only one seek for each of the nh partitions, since each partition can be read sequentially.
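Plugging sample numbers into the two formulas above makes the estimate concrete; the values br = 400, bs = 100, nh = 10, and bb = 4 below are hypothetical:

```python
import math

def hash_join_cost(br, bs, nh, bb):
    """Cost estimate for a hash join without recursive partitioning.
    br, bs: blocks of r and s; nh: partitions; bb: buffer blocks."""
    transfers = 3 * (br + bs) + 4 * nh
    seeks = 2 * (math.ceil(br / bb) + math.ceil(bs / bb)) + 2 * nh
    return transfers, seeks

transfers, seeks = hash_join_cost(br=400, bs=100, nh=10, bb=4)
# transfers = 3*(500) + 4*10 = 1540
# seeks = 2*(100 + 25) + 2*10 = 270
```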

2. Recursive partition is required

In this case, each pass reduces the size of each partition by an expected factor of M − 1, and passes are repeated until each partition is at most M blocks in size. Therefore, the number of passes required to partition the relation s is:

Number of passes = ⌈logM−1(bs) − 1⌉

The number of passes required to partition the probe relation is the same as for the build relation. Since each block of s is read and written out in every pass, splitting relation s needs a total of 2bs⌈logM−1(bs) − 1⌉ block transfers. Thus, the hash join cost estimate is:

Number of block transfers = 2(br + bs)⌈logM−1(bs) − 1⌉ + br + bs

Number of disk seeks = 2(⌈br/bb⌉ + ⌈bs/bb⌉)⌈logM−1(bs) − 1⌉

Here, we assume that bb blocks are allocated for buffering each partition, and we neglect the relatively small number of seeks during the build and probe phases.
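These formulas can likewise be evaluated for a small hypothetical example; with br = 400, bs = 100, M = 12, and bb = 4, a single partitioning pass suffices:

```python
import math

def recursive_hash_join_cost(br, bs, M, bb):
    """Cost estimate for a hash join with recursive partitioning.
    br, bs: blocks of r and s; M: memory blocks; bb: buffer blocks."""
    passes = math.ceil(math.log(bs, M - 1) - 1)
    transfers = 2 * (br + bs) * passes + br + bs
    seeks = 2 * (math.ceil(br / bb) + math.ceil(bs / bb)) * passes
    return passes, transfers, seeks

# passes = ceil(log_11(100) - 1) = 1
# transfers = 2*500*1 + 500 = 1500; seeks = 2*125*1 = 250
```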

As a result, the performance of the hash join algorithm improves as the size of main memory increases.

Hybrid Hash Join

It is a variant of hash join that is useful when memory is relatively large but the build relation still does not fit entirely in memory. The hybrid hash join algorithm resolves this drawback of the basic hash join by keeping the first partition of the build relation in memory, so that it does not need to be written to disk and read back.





