# What is the Buddy System?

In the buddy system, a memory block is split into two smaller parts of equal size, called buddies. The two buddies operate together as a single unit, so each can monitor and help the other. When a request arrives, one of the two buddies is further divided into smaller parts until a block of the requested size is obtained.

According to Merriam-Webster, the first known use of the phrase "buddy system" was in 1942. Webster defines the buddy system as an arrangement in which two individuals are paired (as for mutual safety in a hazardous situation).

The buddy system is, at its core, working together in pairs within a larger group. Both individuals share the job, whether that means ensuring the work is finished safely or handing it off effectively from one individual to the other.

### Types of Buddy System

There are four types of buddy systems:

1. Binary Buddy System

The buddy system maintains a list of the free blocks of each size (called a free list) so that it is easy to find a block of the desired size, if one is available. If no block of the requested size is available, the allocator searches the free lists for the first non-empty list of blocks of at least the requested size. In either case, a block is removed from the free list.

For example, suppose the size of the memory segment is initially 256 KB and the kernel requests 25 KB of memory. The segment is first divided into two buddies, say A1 and A2, each 128 KB in size. One of these buddies is further divided into two 64 KB buddies, B1 and B2. The smallest power of two that can hold 25 KB is 32 KB, so either B1 or B2 is further divided into two 32 KB buddies (C1 and C2), and finally one of these buddies is used to satisfy the 25 KB request. A split block can only be merged with its unique buddy block, which then reforms the larger block they were split from.
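The splitting sequence above can be traced with a short sketch (the function names are illustrative, not part of any standard API):

```python
def next_power_of_two(n):
    """Smallest power of two greater than or equal to n."""
    p = 1
    while p < n:
        p *= 2
    return p

def split_to_fit(segment_kb, request_kb):
    """Trace the splits needed to satisfy one request from a free segment.

    Returns the block size finally allocated and the sizes produced by
    each split along the way (each split yields two buddies of that size).
    """
    target = next_power_of_two(request_kb)
    size = segment_kb
    splits = []
    while size > target:
        size //= 2
        splits.append(size)
    return target, splits

target, splits = split_to_fit(256, 25)
print(target)   # 32  -- the 25 KB request is served from a 32 KB block
print(splits)   # [128, 64, 32] -- the A, B, and C splits from the example
```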

2. Fibonacci Buddy System

A Fibonacci buddy system is a system in which blocks are divided into sizes which are Fibonacci numbers. It satisfies the following relation:

Z(i) = Z(i-1) + Z(i-2)

The original procedures for the Fibonacci buddy system were either limited to a small, fixed number of block sizes or required a time-consuming computation.
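The Fibonacci size classes satisfying the relation above can be generated with a small sketch (the starting sizes of 1 and 2 units are an illustrative assumption; a split Fibonacci block of size Z(i) yields buddies of the unequal sizes Z(i-1) and Z(i-2)):

```python
def fibonacci_block_sizes(max_size):
    """Block sizes satisfying Z(i) = Z(i-1) + Z(i-2), up to max_size."""
    sizes = [1, 2]  # illustrative seed sizes
    while sizes[-1] + sizes[-2] <= max_size:
        sizes.append(sizes[-1] + sizes[-2])
    return sizes

print(fibonacci_block_sizes(100))  # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```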

3. Weighted Buddy System

The weighted buddy system is similar to the original buddy system: large blocks are split iteratively to provide the desired smaller blocks. When blocks are released, they are merged with their buddy if the buddy is free or, failing this, are attached to an available space list. The binary and weighted buddy systems share the following similarities.

• Given a block's address, it is easy to calculate the address of its buddy; this address calculation is straightforward in both the binary and weighted buddy systems.
• In both systems, blocks are allocated from an available space list.

4. Tertiary Buddy System

The tertiary buddy system allows block sizes of 2^k and 3·2^k, whereas the original binary buddy system allows only block sizes of 2^k. This extension is achieved at an additional cost of two bits per block. Simulation of the proposed algorithm has been implemented in the C programming language.

A performance analysis compares the tertiary buddy system, in terms of internal fragmentation, with existing schemes such as the binary, Fibonacci, and weighted buddy systems, along with simulation results for the number of splits and the average number of merges in each system.
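The tertiary system's richer set of size classes can be enumerated with a short sketch (the size unit and the cap of 64 units are illustrative assumptions):

```python
def tertiary_sizes(max_size):
    """All block sizes of the form 2**k or 3 * 2**k up to max_size."""
    sizes = set()
    k = 0
    while 2 ** k <= max_size:
        sizes.add(2 ** k)
        if 3 * 2 ** k <= max_size:
            sizes.add(3 * 2 ** k)
        k += 1
    return sorted(sizes)

# More size classes than the binary system's 1, 2, 4, 8, ... means
# less internal fragmentation on average:
print(tertiary_sizes(64))  # [1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64]
```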

### Buddy System Memory Allocation Technique

The buddy system memory allocation technique is an algorithm that divides memory into partitions to satisfy a memory request as suitably as possible. It works by repeatedly splitting memory in half to find the best fit. Buddy memory allocation is relatively easy to implement, and it supports limited but efficient splitting and merging of memory blocks.

### Algorithm of Buddy Memory Allocation

There are various forms of the buddy system; those in which each block is subdivided into two smaller blocks are the simplest and most common variety. Every memory block in this system has an order, where the order is an integer ranging from 0 to a specified upper limit. The size of a block of order n is proportional to 2^n, so blocks are exactly twice the size of blocks one order lower. Power-of-two block sizes make address computation simple, because all buddies are aligned on memory address boundaries that are powers of two.
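Because of this power-of-two alignment, a block's buddy address differs from its own only in the single bit corresponding to the block's size, so it can be found with one XOR. A minimal sketch (function and parameter names are illustrative):

```python
def buddy_address(addr, order, min_block=1):
    """Address of the buddy of the block at addr with the given order.

    Block sizes are powers of two and blocks are aligned on multiples of
    their own size, so flipping the block-size bit yields the buddy.
    """
    block_size = min_block << order
    return addr ^ block_size

# A 64-byte block (order 6 with min_block=1) at address 128:
print(buddy_address(128, 6))  # 192 -- its buddy is the adjacent 64-byte block
print(buddy_address(192, 6))  # 128 -- the relation is symmetric
```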

• When a larger block is split, it is divided into two smaller blocks, and each smaller block becomes a unique buddy to the other. A split block can only be merged with its unique buddy block, which then reforms the larger block they were split from.
• The size of the smallest possible block is determined, i.e., the smallest memory block that can be allocated. If no lower limit existed (e.g., if bit-sized allocations were possible), the system would incur a lot of memory and computational overhead tracking which parts of memory are allocated and which are free.
• However, a rather low limit may be desirable so that the average memory waste per allocation (concerning allocations that are not multiples of the smallest block in size) is minimized.
• Typically the lower limit would be small enough to minimize the average wasted space per allocation but large enough to avoid excessive overhead. The smallest block size is then taken as the size of an order-0 block so that all higher orders are expressed as power-of-two multiples of this size.
• The programmer then decides, or writes code to determine, the highest possible order that fits in the remaining available memory. Since the total available memory in a given computer system may not be a power-of-two multiple of the minimum block size, the largest block size may not span the system's entire memory.

For example, if the system had 2000 K of physical memory and the order-0 block size was 4 K, the upper limit on the order would be 8 since an order-8 block (256 order-0 blocks, 1024 K) is the biggest block that will fit in memory. Consequently, it is impossible to allocate the entire physical memory in a single chunk; the remaining 976 K of memory would have to be allocated in smaller blocks.
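The upper-limit calculation in this example can be sketched as follows (the function name is illustrative):

```python
def max_order(total_memory_kb, order0_kb):
    """Largest order whose block still fits within total memory."""
    order = 0
    # Keep raising the order while a block one order larger still fits.
    while (order0_kb << (order + 1)) <= total_memory_kb:
        order += 1
    return order

# 2000 K of memory with 4 K order-0 blocks, as in the example above:
print(max_order(2000, 4))   # 8 -- an order-8 block is 4 K * 2**8 = 1024 K
# 1024 K of memory with 64 K order-0 blocks, as in the example below:
print(max_order(1024, 64))  # 4
```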

Buddy system allocation has the following advantages, such as:

• In comparison to other simpler techniques, the buddy memory system has little external fragmentation.
• The buddy memory allocation system is implemented using a binary tree to represent used or unused split memory blocks.
• It allocates a block of the smallest suitable size for each request.
• The buddy system is very fast to allocate or deallocate memory.
• In buddy systems, the cost to allocate and free a block of memory is low compared to that of best-fit or first-fit algorithms.
• It is easy to merge adjacent holes.
• Another main advantage is coalescing: adjacent free buddies can quickly be combined to form larger segments.

It also has some disadvantages, such as:

• It requires all allocation units to be powers of two.
• It leads to internal fragmentation.

### Example of Buddy System Memory Allocation

The following is an example of what happens when a program makes requests for memory. Suppose in this system, the smallest possible block is 64 kilobytes in size, and the upper limit for the order is 4, which results in the largest possible allocatable block being 2^4 × 64 K = 1024 K in size. The following shows a possible state of the system after various memory requests.

This memory allocation could have occurred in the following manner.

Step 1: This is the initial situation.

Step 2: Program A requests 34 K of memory, order 0.

• No order 0 block is available, so an order 4 block is split, creating two order 3 blocks.
• Still no order 0 block is available, so the first order 3 block is split, creating two order 2 blocks.
• Still no order 0 block is available, so the first order 2 block is split, creating two order 1 blocks.
• Still no order 0 block is available, so the first order 1 block is split, creating two order 0 blocks.
• Now an order 0 block is available, so it is allocated to A.

Step 3: Program B requests 66 K of memory, order 1. An order 1 block is available, so it is allocated to B.

Step 4: Program C requests 35 K of memory, order 0. An order 0 block is available, so it is allocated to C.

Step 5: Program D requests 67 K of memory, order 1.

• No order 1 blocks are available, so an order 2 block is split, creating two order 1 blocks.
• Now an order 1 block is available, so it is allocated to D.

Step 6: Program B releases its memory, freeing one order 1 block.

Step 7: Program D releases its memory.

• One order 1 block is freed.
• Since the buddy block of the newly freed block is also free, the two are merged into one order 2 block.

Step 8: Program A releases its memory, freeing one order 0 block.

Step 9: Program C releases its memory.

• One order 0 block is freed.
• Since the buddy block of the newly freed block is also free, the two are merged into one order 1 block.
• Since the buddy block of the newly formed order 1 block is also free, the two are merged into one order 2 block.
• Since the buddy block of the newly formed order 2 block is also free, the two are merged into one order 3 block.
• Since the buddy block of the newly formed order 3 block is also free, the two are merged into one order 4 block.

As the steps above show, a memory request is handled as follows:

1. If memory is to be allocated:
   1. Look for a memory slot of a suitable size (the minimal 2^k block that is larger than or equal to the requested memory).
   2. If it is found, it is allocated to the program.
   3. If it is not found, the system tries to make a suitable memory slot by doing the following:
      • Split a free memory slot larger than the requested memory size in half.
      • If the lower limit is reached, allocate that amount of memory.
      • Go back to looking for a memory slot of a suitable size (step 1).
      • Repeat this process until a suitable memory slot is found.
2. If memory is to be freed:
   1. Free the block of memory.
   2. Look at the neighboring block (its buddy).
   3. If it is also free, combine the two, then go back to step 2 and repeat this process until either the upper limit is reached (all memory is freed) or a non-free buddy is encountered.
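The allocation and free steps above can be sketched as a minimal free-list buddy allocator; the class and method names, and the use of offsets in K, are illustrative assumptions. Replaying the worked example from the previous section (requests of 34 K, 66 K, 35 K, and 67 K, with a 64 K order-0 block and a maximum order of 4) ends, as in step 9, with all memory merged back into a single order-4 block:

```python
class BuddyAllocator:
    """A minimal free-list buddy allocator sketch (offsets and sizes in K)."""

    def __init__(self, min_block=64, max_order=4):
        self.min_block = min_block
        self.max_order = max_order
        # One free list per order; initially a single top-order block at offset 0.
        self.free = {o: [] for o in range(max_order + 1)}
        self.free[max_order].append(0)

    def order_for(self, size):
        """Smallest order whose block size is >= the requested size."""
        order = 0
        while (self.min_block << order) < size:
            order += 1
        return order

    def alloc(self, size):
        """Return (offset, order) of an allocated block, or None if out of memory."""
        want = self.order_for(size)
        # Find the first non-empty free list at or above the wanted order.
        o = want
        while o <= self.max_order and not self.free[o]:
            o += 1
        if o > self.max_order:
            return None
        addr = self.free[o].pop()
        # Split down to the wanted order, putting each upper half on a free list.
        while o > want:
            o -= 1
            self.free[o].append(addr + (self.min_block << o))
        return addr, want

    def free_block(self, addr, order):
        """Free a block, merging with its buddy as long as the buddy is free."""
        while order < self.max_order:
            buddy = addr ^ (self.min_block << order)
            if buddy in self.free[order]:
                self.free[order].remove(buddy)
                addr = min(addr, buddy)
                order += 1
            else:
                break
        self.free[order].append(addr)

pool = BuddyAllocator()
a = pool.alloc(34)   # order 0, after four splits of the order 4 block
b = pool.alloc(66)   # order 1
c = pool.alloc(35)   # order 0
d = pool.alloc(67)   # order 1, after splitting an order 2 block
for block in (b, d, a, c):
    pool.free_block(*block)
print(pool.free[4])  # [0] -- everything merged back into one order-4 block
```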