# Karatsuba Algorithm for Fast Multiplication Using Divide and Conquer in C++

The Karatsuba algorithm is an efficient multiplication algorithm that uses the divide and conquer strategy to multiply two numbers. Anatoly Karatsuba discovered this algorithm in 1960, and it is known for its recursive approach, which reduces the number of single-digit multiplications compared to the traditional "grade-school" multiplication algorithm. The basic idea of the Karatsuba algorithm can be summarized as follows.

Given two n-digit numbers, x and y, we can express each as follows:

• x = 10^(n/2) * a + b
• y = 10^(n/2) * c + d

Here, a, b, c, and d are integers: a and c are the high-order halves of x and y, while b and d are the low-order halves.

The product xy can be expressed as:

xy = (10^(n/2) * a + b)(10^(n/2) * c + d)

Expanding the product gives:

xy = 10^n * ac + 10^(n/2) * (ad + bc) + bd

Recursively compute three products:

• ac (the product of the high-order parts a and c)
• bd (the product of the low-order parts b and d)
• (a + b)(c + d) (the cross product of the sums of the high-order and low-order parts)

Combine the results using the formula:

xy = 10^n * ac + 10^(n/2) * ((a + b)(c + d) - ac - bd) + bd

This works because (a + b)(c + d) = ac + ad + bc + bd, so the middle term ad + bc can be recovered as (a + b)(c + d) - ac - bd. In this way, only three multiplications are needed instead of four.
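As a quick worked example (not in the original text), take x = 53 and y = 43, so n = 2 and 10^(n/2) = 10:

```latex
\begin{aligned}
x &= 53 = 10 \cdot 5 + 3, \qquad a = 5,\ b = 3 \\
y &= 43 = 10 \cdot 4 + 3, \qquad c = 4,\ d = 3 \\
ac &= 20, \qquad bd = 9 \\
(a+b)(c+d) - ac - bd &= 8 \cdot 7 - 20 - 9 = 27 \\
xy &= 10^2 \cdot 20 + 10 \cdot 27 + 9 = 2279
\end{aligned}
```

Only the three products 5 · 4, 3 · 3, and 8 · 7 were computed recursively.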

The Karatsuba algorithm significantly reduces the number of multiplications compared to the traditional algorithm, making the multiplication process more efficient. For large values of n, its time complexity of approximately O(n^(log2 3)) gives it a huge advantage over the O(n^2) time complexity of the traditional approach.

## Approach-1: Divide and Conquer (Karatsuba Algorithm)

The Karatsuba algorithm is like splitting a large problem into manageable chunks. Instead of multiplying the two large numbers directly, it breaks them into smaller pieces and multiplies those. It repeats this process until the pieces are small enough for easy multiplication, then combines the individual results to obtain the final solution. This increases the speed and efficiency of multiplication, especially for long numbers.

### Program:

Let us take an example to illustrate the Karatsuba Algorithm in C++.

Output:

```
Enter the first number: 53
Enter the second number: 43
The product is: 2279
```

Explanation:

• The C++ code implements the divide and conquer-based fast multiplication algorithm known as the Karatsuba algorithm.
• The core of the implementation is the karatsuba function, which receives two long integers, x and y, as input. By breaking the multiplication down into smaller subproblems and solving them recursively, the algorithm is designed to handle large numbers efficiently.
• First, the recursion's base case is examined. For efficiency and simplicity, the function falls back to standard multiplication if either input number, x or y, is a single-digit value smaller than 10.
• Next, the function determines how many digits are in the supplied numbers using the numDigits function, which counts the digits of a given number.
• After that, the code splits each input number in half: x into a (high half) and b (low half), and y into c (high half) and d (low half). This step is the heart of Karatsuba's divide-and-conquer strategy.

Following the split, the function recursively computes three products:

• ac, the product of the high halves a and c.
• bd, the product of the low halves b and d.
• (a + b)(c + d) - ac - bd, obtained by subtracting the previously computed ac and bd from the product of the sums of the halves.
• The final step combines these partial products, with appropriate scaling and addition, to obtain the whole product of x and y.
• The main function acts as the program's entry point. It requests two numbers (x and y) from the user, runs the karatsuba function on these inputs, and prints the result.

Complexity Analysis:

Time and space complexity analysis of the given Karatsuba algorithm involves estimating the cost in terms of time and memory used.

Time Complexity:

• The Karatsuba algorithm's time complexity is expressed in terms of recursive calls and arithmetic operations. Each recursive call performs three multiplications and a handful of additions and subtractions. The recurrence relation for the time complexity can be expressed as:

T(n)=3T(n/2)+O(n)

• In this case, n is the number of digits in the input numbers x and y. The 3T(n/2) term accounts for the three recursive multiplications of half-sized numbers, and the O(n) term covers the splitting and combining steps.
• Solving this recurrence with the Master Theorem shows that the time complexity of the Karatsuba algorithm is O(n^(log2 3)), roughly O(n^1.585). This is better than the O(n^2) time complexity of the standard digit-by-digit multiplication algorithm, and it is especially beneficial for large inputs, where the divide and conquer approach cuts down the overall number of operations.
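Sketching the Master Theorem step explicitly: the recurrence has a = 3 subproblems, each of size n/b with b = 2, and combine cost f(n) = O(n):

```latex
\begin{aligned}
T(n) &= 3\,T(n/2) + O(n) \\
n^{\log_b a} &= n^{\log_2 3} \approx n^{1.585} \\
f(n) = O(n) &= O\!\left(n^{\log_2 3 - \varepsilon}\right) \text{ for some } \varepsilon > 0 \\
\Rightarrow\ T(n) &= \Theta\!\left(n^{\log_2 3}\right)
\end{aligned}
```

Because the combine cost grows strictly slower than n^(log2 3), the recursion tree's leaves dominate (case 1 of the Master Theorem).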

Space Complexity:

• The space complexity of an algorithm refers to the amount of memory the algorithm consumes while running. In the Karatsuba algorithm, the main contributor to space complexity is the recursion stack: each recursive call adds a new frame to the stack, where local variables and return addresses are stored.
• The depth of the recursion tree is at most O(log n), because the problem size is halved at each level. The space complexity of the algorithm is therefore O(log n).
• Note that the implementation needs minimal additional data structures or memory allocation, so the recursion stack dominates the space complexity.

## Approach-2: Bit Manipulation

Bit manipulation can make the Karatsuba algorithm even faster. Bitwise operations make it possible to split the integers into parts without expensive divisions, which simplifies the multiplication. Below is a short overview of how to use bit manipulation in C++ to implement the Karatsuba algorithm.

### Program:

Output:

```
Result: 80
```

Explanation:

Karatsuba Multiplication:

• Karatsuba performs a divide-and-conquer multiplication that is very efficient. It recursively decomposes the multiplication of two numbers into three smaller multiplications of their high and low parts. The key formula, with the operands split at bit position m, is xy = (ac << 2m) + (((a + b)(c + d) - ac - bd) << m) + bd.

Base Case Handling:

• When either x or y is less than 2, the multiplication is performed directly with the normal operator. This serves as the termination condition for the recursion.

Bit Manipulation for Splitting:

• Bit manipulation allows the numbers to be split into high and low parts without explicit conversion to strings or repeated divisions. The size of the numbers in bits is calculated (conceptually, using log base 2), the variable m represents half of that size, and a bitmask is created to isolate the lower bits.

Recursive Steps:

• Bit manipulation techniques break the numbers x and y into high and low parts. After that, recursive calls are performed for the high terms (a and c), the low terms (b and d), and the cross terms ((a + b) and (c + d)). Shifts and masks, rather than costly divisions, make these splits efficient.

Combining Results:

• This last stage involves integrating the results of recursive calls to produce the final output.

Optimizing for Single-Bit Multiplications:

• The base case optimization ensures that one-bit multiplications are conducted with a standard multiplication operation since the recursive approach can bring unnecessary overhead for such small inputs.

Bitwise Operations for Efficiency:

• At strategic points, bitwise operations such as AND (&), right shift (>>), and left shift (<<) are used to manipulate the bits of the numbers, both for splitting the operands and for scaling the partial results when combining them.

Complexity Analysis:

The time and space complexity of the Karatsuba multiplication code with bit manipulation can be analyzed to determine its efficiency.

Time Complexity:

• The time complexity of the Karatsuba algorithm is determined by the basic operations it performs. Multiplying two n-bit numbers is recursively broken down into three n/2-bit multiplications, and all the recursive calls together determine the overall time complexity.
• Let T(n) denote the algorithm's time complexity for multiplying two n-bit numbers.

T(n)=3T(n/2)+O(n)

Here,

• Here, 3T(n/2) accounts for the three recursive multiplications, and O(n) covers the combining step and other linear-time operations.
• Applying the Master Theorem to this recurrence shows that the time complexity of the Karatsuba algorithm is approximately O(n^(log2 3)).

Space Complexity:

• The space complexity of an algorithm is defined by the memory the algorithm uses while it runs. In the Karatsuba algorithm, the space complexity comes from the recursive calls and the memory used to store intermediate results.
• Let S(n) denote the space needed to multiply two n-bit values. The main contributor is the stack space used by the recursive calls.
• The recursion depth is O(log n), because the size of the numbers is halved at each recursive level, so the recursion stack consumes O(log n) space. The additional space needed to store intermediate results is proportional to the number of bits involved in the multiplication and is therefore O(n). The total of O(log n + n) simplifies to O(n) for large n.