
Genetic Algorithm in Soft Computing

A genetic algorithm (GA) is a metaheuristic used in computer science and operations research that draws inspiration from the process of natural selection; it belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms rely on biologically inspired operators such as mutation, crossover, and selection to produce high-quality solutions to optimization and search problems. Typical GA applications include optimizing decision trees for improved performance, solving Sudoku puzzles, hyperparameter optimization, and causal inference.

Methodology

Optimization problems

In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) that can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, although other encodings are also possible.

The evolution usually starts from a population of randomly generated individuals, and the population in each iteration is called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The fitter individuals are stochastically selected from the current population, and each individual's genome is recombined and possibly randomly mutated to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. The algorithm usually terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population.

A typical genetic algorithm requires a genetic representation of the solution domain and a fitness function to evaluate that domain.

Each candidate solution is conventionally represented as an array of bits, also called a bit set or bit string. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their fixed size keeps their parts easy to align, which allows for straightforward crossover operations. Variable-length representations can also be used, but crossover implementation is more complex in that case. Tree-like representations are explored in genetic programming, graph-form representations are explored in evolutionary programming, and a mix of linear chromosomes and trees is examined in gene expression programming.

Once the genetic representation and the fitness function are defined, a GA initializes a population of solutions and then improves it through repeated application of the mutation, crossover, inversion, and selection operators.
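As a concrete illustration of this overall loop, here is a minimal sketch in Python for a toy problem (maximizing the number of 1-bits in a fixed-length string, often called OneMax). The encoding, tournament selection, single-point crossover, and all parameter values are illustrative assumptions, not prescribed choices.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 50, 100
P_CROSSOVER, P_MUTATION = 0.9, 1.0 / GENOME_LEN  # assumed settings

def fitness(genome):
    # Toy objective (OneMax): count the 1-bits.
    return sum(genome)

def select(population):
    # Tournament of size 2: keep the fitter of two random individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) > fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover on fixed-length bit strings.
    if random.random() < P_CROSSOVER:
        cut = random.randrange(1, GENOME_LEN)
        return p1[:cut] + p2[cut:]
    return p1[:]

def mutate(genome):
    # Independent bit-flip mutation.
    return [bit ^ 1 if random.random() < P_MUTATION else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
best = max(population, key=fitness)
print("best fitness after", GENERATIONS, "generations:", fitness(best))
```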

Initialization

The population size depends on the nature of the problem and can range from a few hundred to thousands of candidate solutions. The initial population is usually generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.
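A minimal sketch of both initialization styles, assuming a bit-string encoding; the `seeds` argument and the cap of roughly 10% seeded individuals are illustrative assumptions.

```python
import random

def init_population(pop_size, genome_len, seeds=None):
    """Random initialization, optionally 'seeded' with known good genomes."""
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    if seeds:
        # Overwrite a small slice of the population with copies of
        # promising solutions, leaving the rest random.
        for i, seed in enumerate(seeds[:max(1, pop_size // 10)]):
            population[i] = seed[:]
    return population

# Purely random start vs. a start seeded with one known good genome.
random_pop = init_population(100, 16)
seeded_pop = init_population(100, 16, seeds=[[1] * 16])
```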

Selection

During each successive generation, a portion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Some selection methods rate the fitness of every solution and preferentially select the best ones. Other methods rate only a random sample of the population, as the former process can be very time-consuming.
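The two families of selection methods might be sketched as follows; the `fitness` placeholder and the tournament size are assumptions, but the contrast (a full population scan versus rating only a small random sample) is the point.

```python
import random

def fitness(genome):
    return sum(genome)  # placeholder objective, assumed for illustration

def roulette_select(population):
    # Fitness-proportionate selection: evaluates every individual.
    total = sum(fitness(g) for g in population)
    pick = random.uniform(0, total)
    running = 0.0
    for genome in population:
        running += fitness(genome)
        if running >= pick:
            return genome
    return population[-1]

def tournament_select(population, k=3):
    # Rates only a small random sample, avoiding a full population scan.
    return max(random.sample(population, k), key=fitness)

pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(30)]
print(fitness(roulette_select(pop)), fitness(tournament_select(pop)))
```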

The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem-dependent. In the knapsack problem, for instance, the goal is to maximize the total value of the objects that can be put in a knapsack of some fixed capacity. A solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) indicates whether or not the object is in the knapsack. Not every such representation is valid, as the combined size of the objects may exceed the knapsack's capacity.
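A hedged sketch of such a knapsack fitness function; the item values, weights, capacity, and the zero-fitness treatment of overweight (invalid) bit strings are all made-up illustrative choices.

```python
# Illustrative knapsack instance; the numbers below are arbitrary, and
# scoring overweight solutions as 0 is just one common penalty scheme.
VALUES = [60, 100, 120, 30, 90]
WEIGHTS = [10, 20, 30, 5, 25]
CAPACITY = 50

def knapsack_fitness(genome):
    """Bit i says whether item i is packed; overweight solutions score 0."""
    weight = sum(w for w, bit in zip(WEIGHTS, genome) if bit)
    value = sum(v for v, bit in zip(VALUES, genome) if bit)
    return value if weight <= CAPACITY else 0

print(knapsack_fitness([1, 1, 0, 1, 0]))  # within capacity -> 190
print(knapsack_fitness([1, 1, 1, 1, 1]))  # overweight     -> 0
```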

When it is difficult or even impossible to define a fitness expression for a problem, a simulation may be used to estimate the fitness function value of a phenotype (for example, computational fluid dynamics is used to estimate the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms may be used.

Genetic operators

The next step is to generate a second-generation population of solutions from those selected, through a combination of the genetic operators crossover (also called recombination) and mutation.

A pair of "parent" solutions are chosen from the pool previously chosen to breed to produce each new solution. By employing the aforementioned crossover and mutation techniques to construct a "child" solution, a new solution is created that often shares many traits with its "parents." For every new child, new parents are chosen, and this process is repeated until a fresh population of solutions is produced that is of the correct size. Some study reveals that more than two "parents" produce superior quality chromosomes, even though reproduction techniques based on the utilization of two parents are more "biology inspired."

These processes ultimately result in a next-generation population of chromosomes that differs from the initial generation. Generally, the average fitness will have increased by this procedure, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the parental genetic pool and therefore ensure the genetic diversity of the subsequent generation of children.

The relative role of crossover and mutation is controversial. The importance of mutation-based search is supported by numerous references in Fogel (2006).

Although the two most common genetic operators are crossover and mutation, genetic algorithms can also make use of regrouping, colonization-extinction, or migration.

It is worthwhile to tune parameters such as the mutation probability, crossover probability, and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to the loss of good solutions unless elitist selection is employed.

An appropriate population size ensures sufficient genetic diversity for the problem at hand, but it can waste computational resources if set to a value larger than required.

Heuristics

In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.
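One simple way to realize this heuristic is to retry parent pairing until the parents differ by at least some Hamming distance; the threshold and retry budget below are illustrative assumptions.

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def pick_dissimilar_parents(population, min_distance=3, tries=20):
    """Retry parent pairing until the pair is sufficiently different.

    The distance threshold and retry budget are illustrative choices.
    """
    for _ in range(tries):
        p1, p2 = random.sample(population, 2)
        if hamming(p1, p2) >= min_distance:
            return p1, p2
    return p1, p2  # fall back to the last pair rather than loop forever

pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(40)]
a, b = pick_dissimilar_parents(pop)
print(hamming(a, b))
```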

Termination

This generational process is repeated until a termination condition has been reached. Common terminating conditions are:

  • A solution is found that satisfies the minimum criteria
  • A fixed number of generations has been produced
  • The allocated budget (computation time/money) has been exhausted
  • The highest-ranked solution's fitness is reaching or has reached a plateau, such that successive iterations no longer produce better results
  • Manual inspection
  • Combinations of the above
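These conditions could be combined into a single stopping predicate, as in the sketch below; every threshold (generation cap, plateau window, time budget) is an illustrative assumption.

```python
import time

def should_stop(generation, best_history, start_time,
                max_generations=500, target_fitness=None,
                plateau_window=50, budget_seconds=60.0):
    """Combine common stop conditions; every threshold here is an assumption."""
    if (target_fitness is not None and best_history
            and best_history[-1] >= target_fitness):
        return True   # minimum criteria satisfied
    if generation >= max_generations:
        return True   # fixed number of generations reached
    if time.monotonic() - start_time > budget_seconds:
        return True   # allocated computation budget exhausted
    if (len(best_history) > plateau_window
            and best_history[-1] <= best_history[-plateau_window - 1]):
        return True   # best fitness has plateaued
    return False

# Usage inside a GA loop (hypothetical names):
#   if should_stop(gen, best_per_generation, t0, target_fitness=20.0):
#       break
```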

The building block hypothesis

Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating highly fit solutions when applied to practical problems. The building block hypothesis (BBH) consists of:

  1. A description of a heuristic that performs adaptation by identifying and recombining "building blocks," i.e., low-order, short defining-length schemata with above-average fitness.
  2. A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.

According to Goldberg, short, low-order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.

Because highly fit schemata of low order and short defining length play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near-optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks.

The building-block hypothesis has been repeatedly evaluated and used as a reference throughout the years, despite the lack of consensus regarding its validity. For instance, numerous estimation of distribution algorithms have been proposed in an attempt to provide an environment in which the hypothesis would hold. Although good results have been reported for some classes of problems, skepticism about the generality and viability of the building-block hypothesis as an explanation for the effectiveness of GAs persists. Much work has been done to understand its limitations from the perspective of estimation of distribution algorithms.

Limitations

When compared to other optimization algorithms, using a genetic algorithm has some drawbacks:

  • Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real-world problems such as structural optimization, a single function evaluation may require several hours to several days of complete simulation, and typical optimization methods cannot deal with such problems. In such cases, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. Combining approximate models may be one of the most promising approaches to successfully applying GA to complex real-life problems.
  • Genetic algorithms do not scale well with complexity. That is, where the number of elements exposed to mutation is large, there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house, or a plane. In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.
  • The "better" solution is only in comparison to other solutions. Because of this, the stop condition is not always apparent.
  • In many problems, GAs have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that the algorithm does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, while others may make it easier for the function to find local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or using selection techniques that maintain a diverse population of solutions, although the No Free Lunch theorem shows that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty": any group of individuals of sufficient similarity (within the niche radius) has a penalty added, which reduces the representation of that group in subsequent generations and permits other (less similar) individuals to be maintained in the population (a minimal sketch of this appears just after this list). This tactic, however, may not be effective, depending on the landscape of the problem. Another possible technique is to simply replace part of the population with randomly generated individuals when most of the population is too similar to each other. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. Diversity is less essential in evolutionary programming and evolution strategies, which rely more heavily on mutation.
  • Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions that may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation) or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolutionary programming and evolution strategies can be implemented with a so-called "comma strategy," in which existing parents are not retained and new parents are selected only from offspring. This can be more effective on dynamic problems.
  • GAs cannot effectively solve problems in which the only fitness measure is a single right/wrong measure (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated with (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.
  • For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include approaches based on integer linear programming, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g., ant colony optimization, particle swarm optimization). The suitability of genetic algorithms depends on the amount of knowledge of the problem; well-known problems often have better, more specialized approaches.
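The "niche penalty" mentioned in the list above is often implemented as fitness sharing: each candidate's raw fitness is divided by a count of how crowded its niche is. A minimal sketch, assuming a bit-string encoding, a placeholder objective, and an arbitrary niche radius:

```python
import random

def fitness(genome):
    return sum(genome)  # placeholder objective, assumed for illustration

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def shared_fitness(genome, population, niche_radius=4):
    """Fitness sharing: individuals in crowded niches are penalized.

    Each neighbor within `niche_radius` (an assumed value) inflates the
    niche count, which divides the raw fitness.
    """
    niche_count = sum(1 - hamming(genome, other) / niche_radius
                      for other in population
                      if hamming(genome, other) < niche_radius)
    return fitness(genome) / max(niche_count, 1.0)

pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
print(shared_fitness(pop[0], pop))
```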

Variants

Chromosome representation

The simplest approach represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is also possible to use floating point representations. The floating point representation is a natural fit for evolution strategies and evolutionary programming. "Real-valued" genetic algorithms have been proposed, although this is arguably a misnomer because the notion does not really reflect the building block theory proposed by John Henry Holland in the 1970s. This theory is not without support, though, based on theoretical and experimental results (see below).

The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers representing hashes, objects, nodes in a linked list, indices into instruction tables, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.

When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily effected through mutations or crossover. This has been found to help prevent premature convergence at so-called Hamming barriers, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
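For reference, the standard binary-reflected Gray code conversions look like this; the demo simply checks that consecutive integers differ in exactly one coded bit, so a single mutation can step the decoded value up or down by one.

```python
def to_gray(n):
    # Standard binary-reflected Gray code.
    return n ^ (n >> 1)

def from_gray(g):
    # Invert the Gray code by folding in successively shifted copies.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

for i in range(8):
    print(i, format(to_gray(i), '03b'), from_gray(to_gray(i)))
```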

Other approaches represent chromosomes as arrays of real-valued numbers instead of bit strings. Results from the theory of schemata suggest that, in general, the smaller the alphabet, the better the performance, so it initially surprised researchers that good results were obtained using real-valued chromosomes. This was explained as the set of real values in a finite population of chromosomes forming a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation.
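A sketch of operators for real-valued chromosomes, assuming a blend-style (BLX) crossover and Gaussian mutation; the alpha, sigma, and per-gene mutation probability values are illustrative assumptions.

```python
import random

def blend_crossover(p1, p2, alpha=0.5):
    """BLX-style arithmetic crossover for real-valued genes.

    alpha controls how far outside the parents' interval a child gene
    may fall; 0.5 is a conventional but assumed choice.
    """
    child = []
    for x, y in zip(p1, p2):
        lo, hi = min(x, y), max(x, y)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

def gaussian_mutate(genome, sigma=0.1, p=0.2):
    # Perturb each gene with small Gaussian noise, with probability p.
    return [g + random.gauss(0, sigma) if random.random() < p else g
            for g in genome]

a, b = [0.0, 1.0, 2.0], [1.0, 1.0, 0.0]
print(gaussian_mutate(blend_crossover(a, b)))
```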

By concatenating several types of heterogeneously encoded genes into one chromosome, it is possible to expand the problem domain accessible to the genetic algorithm. This particular approach allows for solving optimization problems in which the defining domains for the problem parameters are vastly disparate. For instance, in problems of cascaded controller tuning, the internal loop controller structure can correspond to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modeling and simulation of complex adaptive systems, especially evolution processes.

Elitism

A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy, known as elitist selection, guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.
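Elitism can be layered onto any breeding scheme; in this sketch the `breed` callback is a hypothetical stand-in for the usual selection, crossover, and mutation pipeline, and only the elitism mechanism itself is shown.

```python
import random

def next_generation_with_elitism(population, fitness, breed, n_elite=2):
    """Build the next generation, copying the best n_elite unchanged.

    `breed` is assumed to be any user-supplied function that produces
    one child from the current population.
    """
    ranked = sorted(population, key=fitness, reverse=True)
    elites = [genome[:] for genome in ranked[:n_elite]]   # kept verbatim
    children = [breed(population)
                for _ in range(len(population) - n_elite)]
    return elites + children

# Toy usage with a random 'breed' stand-in:
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
pop = next_generation_with_elitism(
    pop, fitness=sum,
    breed=lambda p: [random.randint(0, 1) for _ in range(8)])
print(max(sum(g) for g in pop))
```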

Parallel implementations

Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which interacts with neighboring individuals for selection and reproduction. Other variants, like genetic algorithms for online optimization problems, introduce time dependence or noise in the fitness function.
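A coarse-grained ("island") scheme might look like the sketch below, simulated sequentially for brevity; in a real deployment each island would run on its own node or process. The migration interval, migrant count, ring topology, and the trivial per-generation "evolution" step are all illustrative assumptions.

```python
import random

def evolve_one_generation(island):
    # Stand-in for a full GA step on one island; here: light mutation.
    return [[bit ^ 1 if random.random() < 0.05 else bit for bit in g]
            for g in island]

def migrate(islands, n_migrants=2):
    # Ring topology: each island sends its best genomes to the next.
    bests = [sorted(isl, key=sum, reverse=True)[:n_migrants]
             for isl in islands]
    for i, isl in enumerate(islands):
        incoming = bests[(i - 1) % len(islands)]
        isl.sort(key=sum)                 # replace the worst residents
        isl[:n_migrants] = [g[:] for g in incoming]

islands = [[[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
           for _ in range(4)]
for gen in range(50):
    islands = [evolve_one_generation(isl) for isl in islands]
    if gen % 10 == 9:                     # migrate every 10 generations
        migrate(islands)
print([max(sum(g) for g in isl) for isl in islands])
```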

Adaptive GAs

Genetic algorithms with adaptive parameters (adaptive genetic algorithms, or AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Researchers have analyzed GA convergence analytically.

Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust pc and pm in order to maintain the population diversity as well as to sustain the convergence capacity. In an AGA (adaptive genetic algorithm), the adjustment of pc and pm depends on the fitness values of the solutions. Further examples of AGA variants include: the successive zooming method, a simple, early example of improving convergence; CAGA (clustering-based adaptive genetic algorithm), in which the adjustment of pc and pm depends on the optimization stages of the population, evaluated through clustering analysis; and more recent approaches that use abstract variables for deciding pc and pm, such as the dominance and co-dominance principles and LIGA (levelized interpolative genetic algorithm), which combines a flexible GA with modified A* search to tackle search space anisotropy.
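One classic fitness-based adaptation scheme (following Srinivas and Patnaik's 1994 proposal, as commonly described) scales pc and pm by how close a parent's fitness is to the population maximum: above-average individuals get gentler operators, below-average individuals are disrupted more. This is a hedged sketch; the k constants are conventional but assumed values.

```python
def adaptive_rates(f_parent, f_avg, f_max,
                   k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Adapt pc and pm from the fitness of the (better) parent.

    Sketch after Srinivas & Patnaik (1994); k1..k4 are assumed defaults.
    """
    if f_max == f_avg:            # degenerate (uniform) population
        return k3, k4
    if f_parent >= f_avg:
        scale = (f_max - f_parent) / (f_max - f_avg)
        return k1 * scale, k2 * scale   # protect above-average parents
    return k3, k4                       # disrupt below-average parents

pc, pm = adaptive_rates(f_parent=9, f_avg=6, f_max=10)
print(pc, pm)
```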

Combining GA with other optimization techniques can be quite effective. A GA tends to be good at finding generally sound global solutions but inefficient at finding the last few mutations needed to reach the precise optimum. Other techniques (such as simple hill climbing) are quite efficient at finding the absolute optimum in a limited region. Combining GA and hill climbing can improve the efficiency of GA while adding the robustness that hill climbing alone lacks.
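The hybrid idea can be as simple as polishing the GA's best individual with a greedy bit-flip local search; in this sketch the step budget and the OneMax-style placeholder objective are assumptions, and the "GA output" is faked with a random genome.

```python
import random

def fitness(genome):
    return sum(genome)  # placeholder objective, assumed for illustration

def hill_climb(genome, steps=50):
    """Greedy bit-flip local search used to polish a GA solution."""
    best, best_fit = genome[:], fitness(genome)
    for _ in range(steps):
        i = random.randrange(len(best))
        candidate = best[:]
        candidate[i] ^= 1
        if fitness(candidate) > best_fit:
            best, best_fit = candidate, fitness(candidate)
    return best

# Typical hybrid: let the GA find a promising region, then refine.
rough = [random.randint(0, 1) for _ in range(20)]  # stand-in GA output
polished = hill_climb(rough)
print(fitness(rough), fitness(polished))
```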

In the natural case, the rules of genetic variation may likewise mean something different. For instance, provided that steps are stored in consecutive order, crossing over may sum a number of steps from maternal DNA, add a number of steps from paternal DNA, and so on. This is like adding vectors that are more likely to follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order, or any other suitable order, in favor of survival or efficiency.

Gene pool recombination is a variation in which the population evolves instead of its individuals.

A number of variations have been developed to improve the performance of GAs on problems with a high degree of fitness epistasis, that is, problems in which the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA, GEMGA, and LLGA.

Problem domains

Problems which appear to be particularly well suited to solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are frequently used to solve global optimization problems.

As a general rule of thumb, genetic algorithms might be useful in problem domains that have a complex fitness landscape, as mixing (i.e., mutation in combination with crossover) is designed to move the population away from local optima that a traditional hill climbing algorithm might get stuck in. Note that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).

Examples of problems solved by evolutionary algorithms include the optimal design of aerodynamic bodies in complex flowfields, walking methods for computer figures, and antennas designed to pick up radio signals in space.

In his Algorithm Design Manual, Skiena advises against using genetic algorithms for any task:

It is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. The analogy with evolution, where significant progress requires millions of years, can be quite appropriate.

I have never encountered a problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for all your heuristic search needs.

  • Stephen Skiena

History

In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution. Computer simulation of evolution started as early as 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey. His 1954 publication was not widely noticed. Starting in 1957, the Australian quantitative geneticist Alex Fraser published a series of papers on the simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included elements of modern genetic algorithms. Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).

Although Barricelli, in work he reported in 1963, had simulated the evolution of ability to play a simple game, artificial evolution only became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s; Rechenberg's group was able to solve complex engineering problems through evolution strategies. Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975).

Holland's work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.

Commercial products

In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes. In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times's John Markoff wrote about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995. Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its sixth version. Since the 1990s, MATLAB has built in two direct search algorithms (simplex search and pattern search) and three derivative-free optimization heuristic algorithms (simulated annealing, particle swarm optimization, and genetic algorithm).






