Practical Evolutionary Computation: Elitism

Posted in Evolutionary Computation, Java, Software Development by Dan on February 12th, 2009

In my previous article about evolutionary computation, I glossed over the concept of elitism. The Watchmaker Framework's evolve methods require you to specify an elite count. I told you to set this parameter to zero and forget about it. This brief article ties up that loose end by explaining how to use elitism to improve the performance of your evolutionary algorithm.

In an evolutionary algorithm (EA), sometimes good candidates can be lost when cross-over or mutation results in offspring that are weaker than the parents. Often the EA will re-discover these lost improvements in a subsequent generation but there is no guarantee of this. To combat this we can use a feature known as elitism. Elitism involves copying a small proportion of the fittest candidates, unchanged, into the next generation. This can sometimes have a dramatic impact on performance by ensuring that the EA does not waste time re-discovering previously discarded partial solutions. Candidate solutions that are preserved unchanged through elitism remain eligible for selection as parents when breeding the remainder of the next generation.

NOTE: One potential downside of elitism is that it may make it more likely that the evolution converges on a sub-optimal local maximum.

The Watchmaker Framework supports elitism via the second parameter to the evolve method of an EvolutionEngine. This elite count is the number of candidates in a generation that should be copied unchanged from the previous generation, rather than created via evolution. Collectively these candidates are the elite. So for a population size of 100, setting the elite count to 5 will result in the fittest 5% of each generation being copied, without modification, into the next generation.
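The mechanism itself is simple enough to sketch outside of any framework. The following is an illustrative, framework-free sketch (the class and method names are mine, not part of the Watchmaker API): rank the population by fitness and copy the top few candidates straight into the next generation.

```java
import java.util.*;

// Illustrative sketch of elitism (not Watchmaker code): the fittest
// eliteCount candidates are copied, unchanged, into the next generation.
public class ElitismSketch
{
    public static List<String> selectElite(final Map<String, Double> fitness,
                                           int eliteCount)
    {
        List<String> ranked = new ArrayList<String>(fitness.keySet());
        // Sort candidates by descending fitness (assuming natural fitness scores).
        Collections.sort(ranked, new Comparator<String>()
        {
            public int compare(String a, String b)
            {
                return fitness.get(b).compareTo(fitness.get(a));
            }
        });
        return ranked.subList(0, eliteCount);
    }
}
```

The remaining populationSize - eliteCount slots in the next generation are then filled by selection, cross-over and mutation as usual.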

If you run the Hello World example from the previous article both with and without elitism, you will see that it completes in fewer generations with elitism enabled (22 generations vs. 40 when I ran it – though your mileage may vary due to the random nature of the evolution).

Source code for the Hello World example (and several other, more interesting evolutionary programs) is included in the download.

This is the third in a short and irregular series of articles on practical Evolutionary Computation, based on the work-in-progress documentation for the Watchmaker Framework for Evolutionary Computation.  The first article provided an introduction to the field of evolutionary computation and the second article showed how to implement a simple evolutionary algorithm using the Watchmaker Framework.

Further Reading

  • An Introduction to Genetic Algorithms
  • Introduction to Evolutionary Computing
  • The Blind Watchmaker

Practical Evolutionary Computation: Implementation

Posted in Evolutionary Computation, Java by Dan on January 21st, 2009

If an evolutionary algorithm is a good fit for a particular problem, there are plenty of options when it comes to implementing it. You may choose to use a high-level programming language for simplicity, or a low-level language for performance. You could write all of the code yourself from scratch, or you could reuse pre-written components and libraries. In this article we will necessarily be using one particular approach, but it is worth noting that there are alternatives.

Evolution Frameworks

As we saw previously, the basic outline of an evolutionary algorithm is fairly straightforward. It consists of a main loop that performs one generation per iteration, supplemented by a few functions to perform fitness evaluation, selection and mutation/cross-over. When implementing a simple EA, writing this structural code is not particularly onerous. However, if you write many different evolutionary programs, you end up writing code that is very similar over and over again.

A good programmer will usually want to extract and reuse this common code. Once you have done this, you have the basis of an evolutionary computation framework. Typically this will consist of an evolution engine that is reusable and that can accept different functions to customise fitness evaluation, selection and evolutionary operators.

An alternative to using a home-grown framework is to choose a ready-made one. There are open source evolutionary computation frameworks available for most programming languages. For popular languages, such as C, C++ and Java, there are dozens.

The advantage of a ready-made framework that is used by many other programmers is that it will have been well tested and should be free of significant bugs and performance problems. It may also provide advanced features such as parallel and/or distributed processing.

The Watchmaker Framework

The Watchmaker Framework for Evolutionary Computation is an extensible, high-performance, object-oriented framework for implementing platform-independent evolutionary algorithms in Java. It is freely available under a permissive Open Source licence.

This article introduces the core components of the Watchmaker Framework and shows how they can be used to implement simple evolutionary algorithms such as the “Hello World” example outlined previously.

The Evolution Engine

The central object of an evolutionary program built with the Watchmaker Framework is the evolution engine. An evolution engine is a general-purpose implementation of the evolutionary algorithm outline introduced previously.

The framework provides multiple implementations of the EvolutionEngine interface, but the one that you will usually want to use is ConcurrentEvolutionEngine. As its name suggests, it takes advantage of the parallel processing facilities of your computer in order to speed up the evolutionary process.

An EvolutionEngine has a single generic type parameter that indicates the type of object that it can evolve. For the “Hello World” program, we need to be able to evolve Java strings. Code that creates an engine that can evolve strings would look something like this:

EvolutionEngine<String> engine
    = new ConcurrentEvolutionEngine<String>(candidateFactory,
                                            evolutionaryOperator,
                                            fitnessEvaluator,
                                            selectionStrategy,
                                            rng);

Once you have created an EvolutionEngine, your program is as simple as calling the evolve method with appropriate arguments. However, as you can see from the code snippet above, there is a little bit of work to be done first in order to create an EvolutionEngine that is configured appropriately for the given problem. The constructor of the ConcurrentEvolutionEngine class requires five objects. These are:

  • A Candidate Factory
  • An Evolutionary Operator
  • A Fitness Evaluator
  • A Selection Strategy
  • A Random Number Generator

The Candidate Factory

The first object that needs to be plugged into the evolution engine is a candidate factory. Every evolutionary simulation must start with an initial population of candidate solutions and the CandidateFactory interface is the mechanism by which the evolution engine creates this population.

A candidate factory implementation has an associated type. It can only create objects of that type. The type must match the type of the evolution engine that it is plugged into. You can write your own implementation of CandidateFactory for your program or, if you are using a common type such as strings, lists or arrays, you may be able to use a ready-made factory from the org.uncommons.watchmaker.framework.factories package.

For our “Hello World” program, we can use the provided StringFactory:

// Define the set of permitted characters (A-Z plus space).
char[] chars = new char[27];
for (char c = 'A'; c <= 'Z'; c++)
    chars[c - 'A'] = c;
chars[26] = ' ';
// Factory for random 11-character Strings.
CandidateFactory<String> factory = new StringFactory(chars, 11);

Tip: When writing your own CandidateFactory implementations, it is easiest to extend the provided AbstractCandidateFactory base class since there is then only a single method that must be implemented.
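Conceptually, a string factory's job boils down to building random fixed-length strings from the permitted alphabet. A minimal sketch of that idea (my own illustrative code, not the actual StringFactory internals):

```java
import java.util.Random;

// Illustrative sketch of random candidate generation (not the actual
// StringFactory implementation): pick a random permitted character for
// each of the fixed number of positions.
public class RandomStringSketch
{
    public static String randomCandidate(char[] alphabet, int length, Random rng)
    {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++)
            sb.append(alphabet[rng.nextInt(alphabet.length)]);
        return sb.toString();
    }
}
```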

Evolutionary Operators

Evolutionary operators are the components that perform the actual evolution of a population. Cross-over is an evolutionary operator, as is mutation.

In the Watchmaker Framework, evolutionary operators are defined in terms of the EvolutionaryOperator interface. This interface declares a single method that takes a list of selected individuals and returns a list of evolved individuals. Some operators (e.g. mutation) will process one individual at a time, whereas others will process individuals in groups (cross-over processes two individuals at a time).

As with candidate factories, evolutionary operators have associated types that must be compatible with the type of the evolution engine that they are used with. And, as with candidate factories, the framework provides several ready-made operators for common types. These can be found in the org.uncommons.watchmaker.framework.operators package. The cross-over and mutation operators that we need for our “Hello World” program are provided by the StringCrossover and StringMutation classes.

The Evolution Pipeline

Alert readers will have noticed that the evolution engine constructor only accepts a single evolutionary operator. So how can we use both cross-over and mutation? The answer is provided by the EvolutionPipeline operator. This is a compound evolutionary operator that chains together multiple operators of the same type.

List<EvolutionaryOperator<String>> operators
    = new LinkedList<EvolutionaryOperator<String>>();
operators.add(new StringCrossover());
operators.add(new StringMutation(chars, new Probability(0.02)));
EvolutionaryOperator<String> pipeline = new EvolutionPipeline<String>(operators);

Note: The evolution pipeline is just one of many useful operators included in the org.uncommons.watchmaker.framework.operators package. Elaborate evolution schemes can be constructed from combinations of these operators. Users of the Watchmaker Framework should take a few minutes to browse the API documentation and familiarise themselves with the available classes.

The Fitness Evaluator

So far we’ve been able to build our evolutionary program by simply combining instances of classes provided by the framework. There is one part of the program that we will always have to write for ourselves, though, and that is the fitness function, which is necessarily different for every program.

In the Watchmaker Framework, a fitness function is written by implementing the FitnessEvaluator interface. The getFitness method of this interface takes a candidate solution and returns its fitness score as a Java double. The method actually takes two arguments, but we can ignore the second for now.

The listing below is a fitness evaluator for the “Hello World” program. It simply assigns one point for each character in the candidate string that matches the corresponding position in the target string.

public class StringEvaluator implements FitnessEvaluator<String>
{
    private final String targetString = "HELLO WORLD";

    /**
     * Assigns one "fitness point" for every character in the
     * candidate String that matches the corresponding position in
     * the target string.
     */
    public double getFitness(String candidate,
                             List<? extends String> population)
    {
        int matches = 0;
        for (int i = 0; i < candidate.length(); i++)
            if (candidate.charAt(i) == targetString.charAt(i))
                ++matches;
        return matches;
    }

    public boolean isNatural()
    {
        return true;
    }
}

By some fitness measures, a higher value indicates a fitter solution. In other cases a lower value is better. The isNatural method of a fitness evaluator simply specifies which scenario applies. In Watchmaker Framework terminology, a natural fitness function is one that returns higher values for fitter individuals.

Selection Strategy

Selection is a key ingredient in any evolutionary algorithm. It’s what determines which individuals survive to reproduce and which are discarded. All we’ve said about selection so far is that it should favour fitter individuals. This definition permits several different implementations. The Watchmaker Framework includes all of the most common selection strategies in the org.uncommons.watchmaker.framework.selection package. These are sufficient for most evolutionary algorithms but, if necessary, it is straightforward to write your own implementation of the SelectionStrategy interface.

Some selection strategies work better than others for certain problems. Often a little trial-and-error is required to pick the best option. For now we will just create an instance of the RouletteWheelSelection class and use that for our “Hello World” application.  Roulette wheel selection is the most common type of fitness-proportionate selection. It gives all individuals a chance of being selected but favours the fitter individuals since an individual’s selection probability is derived from its fitness score.
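Fitness-proportionate selection itself is easy to sketch in plain Java (illustrative code, not the RouletteWheelSelection implementation): pick a random point on a "wheel" whose segments are sized in proportion to each candidate's fitness.

```java
import java.util.Random;

// Illustrative roulette wheel selection sketch (not Watchmaker code):
// each candidate occupies a share of the wheel proportional to its
// fitness, so fitter candidates are more likely to be picked.
public class RouletteSketch
{
    /** Returns the index of the selected candidate. */
    public static int spin(double[] fitness, Random rng)
    {
        double total = 0;
        for (double f : fitness)
            total += f;
        double point = rng.nextDouble() * total;
        double cumulative = 0;
        for (int i = 0; i < fitness.length; i++)
        {
            cumulative += fitness[i];
            if (point < cumulative)
                return i;
        }
        return fitness.length - 1; // Guard against rounding error.
    }
}
```

Note that a candidate with zero fitness occupies no space on the wheel and so can never be selected.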

Random Number Generator

The final dependency that must be satisfied in order to create an evolution engine is the random number generator (RNG). An evolution engine has a single RNG that it passes to its candidate factory, evolutionary operator and selection strategy. A discussion of the merits of various RNGs is beyond the scope of this article.  The standard Java RNG (java.util.Random) is flawed, so instead we will use the provided org.uncommons.maths.random.MersenneTwisterRNG.

Completing the Jigsaw

We’ve now got all of the necessary pieces to complete the “Hello World” example application. Assuming that you’ve already created the StringEvaluator class (defined above) in a separate file, the code needed to create the evolution engine looks like this:

// Create a factory to generate random 11-character Strings.
char[] chars = new char[27];
for (char c = 'A'; c <= 'Z'; c++)
    chars[c - 'A'] = c;
chars[26] = ' ';
CandidateFactory<String> factory = new StringFactory(chars, 11);
// Create a pipeline that applies cross-over then mutation.
List<EvolutionaryOperator<String>> operators
    = new LinkedList<EvolutionaryOperator<String>>();
operators.add(new StringCrossover());
operators.add(new StringMutation(chars, new Probability(0.02)));
EvolutionaryOperator<String> pipeline = new EvolutionPipeline<String>(operators);
FitnessEvaluator<String> fitnessEvaluator = new StringEvaluator();
SelectionStrategy<Object> selection = new RouletteWheelSelection();
Random rng = new MersenneTwisterRNG();
EvolutionEngine<String> engine
    = new ConcurrentEvolutionEngine<String>(factory,
                                            pipeline,
                                            fitnessEvaluator,
                                            selection,
                                            rng);

The listing above only creates the evolution engine, it does not perform any evolution. For that we need to call the evolve method. The evolve method takes three parameters. The first is the size of the population. This is the number of candidate solutions that exist at any time. A bigger population will often result in a satisfactory solution being found in fewer generations. On the other hand, the processing of each generation will take longer because there are more individuals to deal with. For the “Hello World” program, a population size of 10 is fine.

The second parameter is concerned with elitism. For now, just use a value of zero. The final varargs parameter specifies one or more termination conditions.

Termination Conditions

Termination conditions make the evolution stop. There are a few reasons why we would like the evolution to stop. The most obvious is when it has found the solution that we are looking for. In the case of the “Hello World” program, that is when we have found the target string. The target string has a fitness score of 11, so we use the TargetFitness condition.

To complete the evolutionary “Hello World” application, add the following two lines:

String result = engine.evolve(10, 0, new TargetFitness(11));
System.out.println(result);

Note: When we move on to less trivial evolutionary programs, we will rarely be able to specify the outcome so precisely. The org.uncommons.watchmaker.framework.termination package includes other termination conditions that can be used. For example, we may want the program to run for a certain period of time, or a certain number of generations, and then return the best solution it has found up until that point. The ElapsedTime and GenerationCount conditions provide this functionality. Alternatively, we may want the program to continue as long as it is finding progressively better solutions. The Stagnation condition will terminate the evolution after a set number of generations pass without any improvement in the fitness of the fittest candidate. If multiple termination conditions are specified, the evolution will stop as soon as any one of them is satisfied.
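The idea behind a Stagnation-style condition can be sketched framework-free (illustrative code, not the actual Stagnation class): remember the best fitness seen so far and count how many generations pass without it improving.

```java
// Illustrative sketch of stagnation detection (not the Watchmaker
// implementation): terminate once a set number of generations pass
// without any improvement in the best fitness score.
public class StagnationSketch
{
    private final int generationLimit;
    private double bestFitness = Double.NEGATIVE_INFINITY;
    private int generationsWithoutImprovement = 0;

    public StagnationSketch(int generationLimit)
    {
        this.generationLimit = generationLimit;
    }

    /** Call once per generation; returns true when evolution should stop. */
    public boolean shouldTerminate(double currentBestFitness)
    {
        if (currentBestFitness > bestFitness)
        {
            bestFitness = currentBestFitness;
            generationsWithoutImprovement = 0;
        }
        else
        {
            generationsWithoutImprovement++;
        }
        return generationsWithoutImprovement >= generationLimit;
    }
}
```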

Evolution Observers

Compile and run the above code and, perhaps after a brief pause, you’ll see the following output:

  HELLO WORLD
This is quite probably the most convoluted “Hello World” program you’ll ever write. It also gives no hints as to its evolutionary nature. We can make the program more interesting by adding an EvolutionObserver to report on the progress of the evolution at the end of each generation. Add the following code to your program before the call to the evolve method:

engine.addEvolutionObserver(new EvolutionObserver<String>()
{
    public void populationUpdate(PopulationData<? extends String> data)
    {
        System.out.printf("Generation %d: %s\n",
                          data.getGenerationNumber(),
                          data.getBestCandidate());
    }
});

Re-compile the program and run it again. This time you’ll see all of the steps taken to arrive at the target string:

  Generation 0: JIKDORHOQZJ
  Generation 1: ULLLFQWZPXG
  Generation 2: UEULKFVFZLS
  Generation 3: KLLLFKZGRLS
  Generation 4: HLLLFKZGRLS
  Generation 5: HEDPOYWOZLS
  Generation 6: HEULKIWWZLD
  Generation 7: HPRLOYWOZLS
  Generation 8: HEULOYWOZLS
  Generation 9: HEULOYWORLS
  Generation 10: HEULOYWORLS
  Generation 11: HPLLK WQRLH
  Generation 12: HEBLOYWQRLS
  Generation 13: HEULOYWOBLA
  Generation 14: HEBLOIWMRLD
  Generation 15: HEBLOIWMRLD
  Generation 16: HEYLFNWQRLD
  Generation 17: HEBLOIWORLS
  Generation 18: HEBLOIWORLT
  Generation 19: HEBLOKWGRLD
  Generation 20: HELLAYWORLS
  Generation 21: HELHOIWORLT
  Generation 22: HEWLOIWORLS
  Generation 23: HEBLOYCORLD
  Generation 24: HELLKQWORLD
  Generation 25: HELLOIWORLT
  Generation 26: HELLOIWORLS
  Generation 27: HELLKQWORLD
  Generation 28: HELLFYWORLD
  Generation 29: HELLOIWORLD
  Generation 30: HELLOIWORLD
  Generation 31: HELLOIWORLD
  Generation 32: HELLOIWORLD
  Generation 33: HELLOIWORLD
  Generation 34: HELLOIWORLD
  Generation 35: HELLOIWDRLD
  Generation 36: HELLOIWORLD
  Generation 37: HELLOIWORLD
  Generation 38: HELLOPWORLD
  Generation 39: HELLOIWORLD
  Generation 40: HELLO WORLD

This is the second in a short series of articles on practical Evolutionary Computation.  The text is taken from the work-in-progress documentation for the Watchmaker Framework for Evolutionary Computation.  The first article provided an introduction to the field of evolutionary computation.

Further Reading

  • An Introduction to Genetic Algorithms
  • Introduction to Evolutionary Computing
  • The Blind Watchmaker

Practical Evolutionary Computation: An Introduction

Posted in Evolutionary Computation, Software Development by Dan on January 20th, 2009

Software is normally developed in a very precise, deterministic way. The behaviour of a computer is governed by strict logical rules. A computer invariably does exactly what it is told to do.

When writing a program to solve a particular problem, software developers will identify the necessary sub-tasks that the program must perform. Algorithms are chosen and implemented for each task. The completed program becomes a detailed specification of exactly how to get from A to B. Every aspect is carefully designed by its developers who must understand how the various components interact to deliver the program’s functionality.

This prescriptive approach to solving problems with computers has served us well and is responsible for most of the software applications that we use today. However, it is not without limitations. Solutions to problems are constrained by the intuition, knowledge and prejudices of those who develop the software. The programmers have to know exactly how to solve the problem.

Another characteristic of the prescriptive approach that is sometimes problematic is that it is best suited to finding exact answers. Not all problems have exact solutions, and some that do may be too computationally expensive to solve. Sometimes it is more useful to be able to find an approximate answer quickly than to waste time searching for a better solution.

What are Evolutionary Algorithms?

Evolutionary algorithms (EAs) are inspired by the biological model of evolution and natural selection first proposed by Charles Darwin in 1859. In the natural world, evolution helps species adapt to their environments. Environmental factors that influence the survival prospects of an organism include climate, availability of food and the dangers of predators.

Species change over the course of many generations. Mutations occur randomly. Some mutations will be advantageous, but many will be useless or detrimental. Progress comes from the feedback provided by non-random natural selection. For example, organisms that can survive for long periods without water will be more likely to thrive in dry conditions than those that can’t. Likewise, animals that can run fast will be more successful at evading predators than their slower rivals. If a random genetic modification helps an organism to survive and to reproduce, that modification will itself survive and spread throughout the population, via the organism’s offspring.

Evolutionary algorithms are based on a simplified model of this biological evolution. To solve a particular problem we create an environment in which potential solutions can evolve. The environment is shaped by the parameters of the problem and encourages the evolution of good solutions.

The field of Evolutionary Computation encompasses several types of evolutionary algorithm. These include Genetic Algorithms (GAs), Evolution Strategies, Genetic Programming (GP), Evolutionary Programming and Learning Classifier Systems.

The most common type of evolutionary algorithm is the generational genetic algorithm.  The basic outline of a generational GA is as follows (most other EA variants are broadly similar).  A population of candidate solutions is iteratively evolved over many generations. Mimicking the concept of natural selection in biology, the survival of candidates (or their offspring) from generation to generation in an EA is governed by a fitness function that evaluates each candidate according to how close it is to the desired outcome, and a selection strategy that favours the better solutions. Over time, the quality of the solutions in the population should improve. If the program is successful, we can terminate the evolution once it has found a solution that is good enough.

An Example

Now that we have introduced the basic concepts and terminology, I will attempt to illustrate by way of an example. Suppose that we want to use evolution to generate a particular character string, for example “HELLO WORLD”. This is a contrived example in as much as it assumes that we don’t know how to create such a string and that evolution is the best approach available to us. However, bear with me as this simple example is useful for demonstrating exactly how the evolutionary approach works.

Each candidate solution in our population will be a string. We’ll use a fixed-length representation so that each string is 11 characters long. Each character in a string will be one of the 27 valid characters (the upper case letters ‘A’ to ‘Z’ plus the space character).

For the fitness function we’ll use the simple approach of assigning a candidate solution one point for each position in the string that has the correct character. For the string “HELLO WORLD” this gives a maximum possible fitness score of 11 (the length of the string).

The first task for the evolutionary algorithm is to randomly generate the initial population. We can use any size population that we choose. Typical EA population sizes can vary from tens to thousands of individuals. For this example we will use a population size of 10. After the initialisation of the population we might have the following candidates (fitness scores in brackets):

  1.  GERZUNFXCEN  (1)
  2.  HSFDAHDMUYZ  (1)
  3.  UQ IGARHGJN  (0)
  4.  ZASIB WSUVP  (2)
  5.  XIXROIUAZBH  (1)
  6.  VDLGCWMBFYA  (1)
  7.  SY YUHYRSEE  (0)
  8.  EUSVBIVFHFK  (0)
  9.  HHENRFZAMZH  (1)
  10. UJBBDFZPLCN  (0)

None of these candidate solutions is particularly good. The best (number 4) has just two characters out of eleven that match the target string (the space character and the ‘W’).

The next step is to select candidates based on their fitness and use them to create a new generation.  One technique for favouring the selection of fitter candidates over weaker candidates is to assign each candidate a selection probability proportionate to its fitness.

If we use fitness-proportionate selection, none of the candidates with zero fitness will be selected. The total fitness of the population is 7, so the candidate with a fitness of 2 has a 2/7 chance of being selected on each draw, twice the 1/7 chance of each of the candidates with a fitness of 1. For the next step we need to select 10 parents, so it is obvious that some of the fit candidates are going to be selected multiple times.

Now that we have some parents, we can breed the next generation. We do this via a process called cross-over, which is analogous to sexual reproduction in biology. For each pair of parents, a cross-over point is selected randomly. Assuming that the first two randomly selected parents are numbers 2 and 4, if the cross-over occurs after the first four characters, we will get the following offspring:

  Parent 1:     HSFDAHDMUYZ
  Parent 2:     ZASIB WSUVP
  Offspring 1:  HSFDB WSUVP
  Offspring 2:  ZASIAHDMUYZ

This recombination has given us two new candidates for the next generation, one of which is better than either of the parents (offspring 1 has a fitness score of 3). This shows how cross-over can lead towards better solutions. However, looking at the initial population as a whole, we can see that no combination of cross-overs will ever result in a candidate with a fitness higher than 6. This is because, among all 10 original candidates, there are only 6 positions in which we have the correct character.
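Single-point cross-over for fixed-length strings takes only a couple of lines of code. A sketch (illustrative; it assumes the cross-over point is supplied rather than chosen randomly, so that the example above can be reproduced exactly):

```java
// Illustrative single-point cross-over for fixed-length strings: swap
// the tails of the two parents at the given cross-over point.
public class CrossoverSketch
{
    public static String[] crossover(String parent1, String parent2, int point)
    {
        String offspring1 = parent1.substring(0, point) + parent2.substring(point);
        String offspring2 = parent2.substring(0, point) + parent1.substring(point);
        return new String[]{offspring1, offspring2};
    }
}
```

With parents "HSFDAHDMUYZ" and "ZASIB WSUVP" and a cross-over point of 4, this produces exactly the two offspring shown above.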

This can be mitigated to some extent by increasing the size of the population. With 100 individuals in the initial population we would be much more likely to have the necessary building blocks for a perfect solution, but there is no guarantee. This is where mutation comes in.

Mutation is implemented by modifying each character in a string according to some small probability, say 0.02 or 0.05. This means that any single individual will be changed only slightly by mutation, or perhaps not at all.
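A sketch of this per-character mutation (illustrative code only): each character is independently replaced by a random permitted character with the given probability.

```java
import java.util.Random;

// Illustrative per-character mutation sketch: each position is replaced
// by a random permitted character with a small probability, so most
// individuals change only slightly, or not at all.
public class MutationSketch
{
    public static String mutate(String candidate, char[] alphabet,
                                double probability, Random rng)
    {
        char[] result = candidate.toCharArray();
        for (int i = 0; i < result.length; i++)
            if (rng.nextDouble() < probability)
                result[i] = alphabet[rng.nextInt(alphabet.length)];
        return new String(result);
    }
}
```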

By applying mutation to each of the offspring produced by cross-over, we will occasionally introduce correct characters in new positions. We will also occasionally remove correct characters but these bad mutations are unlikely to survive selection in the next generation, so this is not a big problem. Advantageous mutations will be propagated by cross-over and selection and will quickly spread throughout the population.

After repeating this process for dozens or perhaps even hundreds of generations we will eventually converge on our desired solution.

This is a convoluted process for finding a string that we already knew to start with. However, as we shall see later, the evolutionary approach generalises to deal with problems where we don’t know what the best solution is and therefore can’t encode that knowledge in our fitness function.

The important point demonstrated by this example is that we can arrive at a satisfactory solution without having to enumerate every possible candidate in the search space. Even for this trivial example, a brute force search would involve generating and checking approximately 5.6 quadrillion strings.
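That figure follows directly from the representation: 27 permitted characters in each of 11 positions gives 27^11 candidate strings. A quick sanity check (exact integer arithmetic, since Math.pow works in floating point):

```java
// 27 permitted characters in each of 11 positions: 27^11 candidates.
public class SearchSpaceSketch
{
    public static long searchSpaceSize()
    {
        long size = 1;
        for (int i = 0; i < 11; i++)
            size *= 27;
        return size; // 5,559,060,566,555,523 -- roughly 5.6 quadrillion
    }
}
```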

The Outline of an Evolutionary Algorithm

  1. Genesis – Create an initial set (population) of n candidate solutions. This may be done entirely randomly or the population may be seeded with some hand-picked candidates.
  2. Evaluation – Evaluate each member of the population using some fitness function.
  3. Survival of the Fittest – Select a number of members of the evaluated population, favouring those with higher fitness scores. These will be the parents of the next generation.
  4. Evolution – Generate a new population of offspring by randomly altering and/or combining elements of the parent candidates. The evolution is performed by one or more evolutionary operators. The most common operators are cross-over and mutation. Cross-over takes two parents, cuts them each into two or more pieces and recombines the pieces to create two new offspring. Mutation copies an individual but with small, random modifications (such as flipping a bit from zero to one).
  5. Iteration – Repeat steps 2-4 until a satisfactory solution is found or some other termination condition is met (such as the number of generations or elapsed time).
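The five steps above can be put together as a compact, framework-free genetic algorithm for the string example (illustrative code only; the parameter choices and the use of simple truncation selection are my own simplifications, not a recommendation):

```java
import java.util.*;

// Illustrative, framework-free generational GA for evolving a target
// string. Truncation selection and the 5% mutation rate are arbitrary
// simplifications for the sake of a short listing.
public class SimpleGA
{
    static final String TARGET = "HELLO WORLD";
    static final String ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";
    static final Comparator<String> FITTEST_FIRST = new Comparator<String>()
    {
        public int compare(String a, String b)
        {
            return fitness(b) - fitness(a);
        }
    };

    public static int fitness(String candidate)
    {
        int matches = 0;
        for (int i = 0; i < candidate.length(); i++)
            if (candidate.charAt(i) == TARGET.charAt(i))
                matches++;
        return matches;
    }

    public static String evolve(int populationSize, int maxGenerations, Random rng)
    {
        // 1. Genesis: a random initial population.
        List<String> population = new ArrayList<String>();
        for (int i = 0; i < populationSize; i++)
        {
            StringBuilder sb = new StringBuilder();
            for (int j = 0; j < TARGET.length(); j++)
                sb.append(ALPHABET.charAt(rng.nextInt(ALPHABET.length())));
            population.add(sb.toString());
        }
        for (int gen = 0; gen < maxGenerations; gen++)
        {
            // 2. Evaluation: sort so the fittest candidate comes first.
            Collections.sort(population, FITTEST_FIRST);
            if (fitness(population.get(0)) == TARGET.length())
                break; // 5. Iteration stops: satisfactory solution found.
            // 3. Survival of the fittest: parents drawn from the top half.
            List<String> next = new ArrayList<String>();
            next.add(population.get(0)); // Keep one elite candidate unchanged.
            while (next.size() < populationSize)
            {
                String p1 = population.get(rng.nextInt(populationSize / 2));
                String p2 = population.get(rng.nextInt(populationSize / 2));
                // 4. Evolution: single-point cross-over...
                int point = rng.nextInt(TARGET.length());
                char[] child = (p1.substring(0, point) + p2.substring(point)).toCharArray();
                // ...followed by per-character mutation.
                for (int i = 0; i < child.length; i++)
                    if (rng.nextDouble() < 0.05)
                        child[i] = ALPHABET.charAt(rng.nextInt(ALPHABET.length()));
                next.add(new String(child));
            }
            population = next;
        }
        Collections.sort(population, FITTEST_FIRST);
        return population.get(0);
    }
}
```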

When are Evolutionary Algorithms Useful?

Evolutionary algorithms are typically used to provide good approximate solutions to problems that cannot be solved easily using other techniques. Many optimisation problems fall into this category. It may be too computationally-intensive to find an exact solution but sometimes a near-optimal solution is sufficient. In these situations evolutionary techniques can be effective. Due to their random nature, evolutionary algorithms are never guaranteed to find an optimal solution for any problem, but they will often find a good solution if one exists.

One example of this kind of optimisation problem is the challenge of timetabling. Schools and universities must arrange room and staff allocations to suit the needs of their curriculum. There are several constraints that must be satisfied. A member of staff can only be in one place at a time and can only teach classes that are in their area of expertise; rooms cannot host lessons if they are already occupied; and classes must not clash with other classes taken by the same students. This is a combinatorial problem that is known to be NP-hard. It is not feasible to exhaustively search for the optimal timetable due to the huge amount of computation involved. Instead, heuristics must be used. Genetic algorithms have proven to be a successful way of generating satisfactory solutions to many scheduling problems.

Evolutionary algorithms can also be used to tackle problems that humans don’t really know how to solve. An EA, free of any human preconceptions or biases, can generate surprising solutions that are comparable to, or better than, the best human-generated efforts. It is merely necessary that we can recognise a good solution if it were presented to us, even if we don’t know how to create a good solution. In other words, we need to be able to formulate an effective fitness function.

NASA ESG evolved antenna.

Engineers working for NASA know a lot about physics. They know exactly which characteristics make for a good communications antenna. But the process of designing an antenna so that it has the necessary properties is hard. Even though the engineers know what is required from the final antenna, they may not know how to design the antenna so that it satisfies those requirements.

NASA’s Evolvable Systems Group has used evolutionary algorithms to successfully evolve antennas for use on satellites. These evolved antennas (pictured) have irregular shapes with no obvious symmetry. It is unlikely that a human expert would have arrived at such an unconventional design. Despite this, when tested these antennas proved to be extremely well adapted to their purpose.

Other Examples of Evolutionary Computation in Action


There are two requirements that must be met before an evolutionary algorithm can be used for a particular problem. Firstly, we need a way to encode candidate solutions to the problem. The simplest encoding, and that used by many genetic algorithms, is a bit string. Each candidate is simply a sequence of zeros and ones. This encoding makes cross-over and mutation very straightforward, but that does not mean that you cannot use more complicated representations. In fact, most of the examples listed in the previous section used more sophisticated candidate representations. As long as we can devise a scheme for evolving the candidates, there really is no restriction on the types that we can use. Genetic programming (GP) is a good example of this. GP evolves computer programs represented as syntax trees.

The second requirement for applying evolutionary algorithms is that there must be a way of evaluating partial solutions to the problem – the fitness function. It is not sufficient to evaluate solutions as right or wrong, the fitness score needs to indicate how right or, if your glass is half empty, how wrong a candidate solution is. So a function that returns either 0 or 1 is useless. A function that returns a score on a scale of 1 – 100 is better. We need shades of grey, not just black and white, since this is how the algorithm guides the random evolution to find increasingly better solutions.
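For example, a graded fitness function for evolving a target string (in the spirit of the Hello World example) might count matching characters rather than returning a simple pass/fail. This is a hypothetical sketch, not the example’s actual code:

```java
public class StringFitness {
    // Graded fitness: the number of characters that match the target.
    // This gives the EA a gradient to climb, rather than a binary
    // right/wrong verdict that offers no guidance.
    static int fitness(String candidate, String target) {
        int score = 0;
        for (int i = 0; i < target.length(); i++) {
            if (candidate.charAt(i) == target.charAt(i)) {
                score++;
            }
        }
        return score;
    }
}
```

A candidate that is one character away from the target scores higher than one that is ten characters away, so selection can favour it even though both are "wrong".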

This is the first in a short series of articles on practical Evolutionary Computation.  The text is taken from the work-in-progress documentation for the Watchmaker Framework for Evolutionary Computation.  The next article will demonstrate how to implement evolutionary algorithms in Java using the Watchmaker Framework.

Further Reading

  • An Introduction to Genetic Algorithms
  • Introduction to Evolutionary Computing
  • The Blind Watchmaker

Watchmaker Framework for Evolutionary Computation – Version 0.5.0

Posted in Evolutionary Computation, Java by Dan on December 10th, 2008

It’s been very nearly a year since the last release (0.4.3) of the Watchmaker Framework for Evolutionary Computation so, before 2008 disappears completely, it’s time for a new version that includes some of the stuff that I’ve been working on intermittently during this time.


The primary purpose of the 0.5.0 release is to break backwards-compatibility and upset users.  If you’re already using 0.4.3, there are two main changes that you need to be aware of:

Firstly, the StandaloneEvolutionEngine class has been renamed to ConcurrentEvolutionEngine. The old name did not reflect what distinguished this EvolutionEngine implementation from other implementations (some of which are yet to be added but will be in future releases). One such alternative is the new SequentialEvolutionEngine, which performs all work on the request thread and, as such, is useful in managed environments that do not permit control over threading.

Secondly, all methods and constructors that take probabilities as parameters now use the new Probability type rather than Java doubles. There was too much duplication of range-checking code and code that generated events with a given probability. All of this has now been encapsulated in one place.
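The idea can be illustrated with a stripped-down value class. This is a hypothetical sketch of the concept, not the framework’s actual Probability API:

```java
import java.util.Random;

// Minimal sketch of a probability value object: the range check and the
// "did the event happen?" logic live in one place instead of being
// duplicated wherever a probability parameter is accepted.
public final class Probability {
    private final double value;

    public Probability(double value) {
        if (value < 0 || value > 1) {
            throw new IllegalArgumentException("Probability must be between 0 and 1: " + value);
        }
        this.value = value;
    }

    public double doubleValue() {
        return value;
    }

    // Returns true with this probability, using the supplied RNG.
    public boolean nextEvent(Random rng) {
        return rng.nextDouble() < value;
    }
}
```

Any code that previously took a raw double can take a Probability instead and stop worrying about out-of-range values.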

Genetic Programming

The Watchmaker download includes source code for several example applications. New in this release is the tree-based genetic programming (GP) example. Genetic Programming is a type of evolutionary algorithm that involves evolving computer programs. The Watchmaker Framework is a general-purpose evolutionary computation framework.  It does not include any GP-specific classes but there is nothing preventing you from applying it to the task of evolving program trees. This example is a Watchmaker-powered Java implementation of the first GP problem presented in Toby Segaran’s book, Programming Collective Intelligence. From a set of example inputs and outputs, it generates a program that encapsulates the formula for converting the inputs to the outputs.

Useful Links

Distributed Evolutionary Algorithms with Watchmaker and Hadoop

Posted in Evolutionary Computation, Java by Dan on October 1st, 2008

One feature that has been on the TODO list of the Watchmaker Framework for Evolutionary Computation for some time is the ability to distribute the evolution across several machines.  Some time last year I started on an RMI-based solution, but I wasn’t happy with it so I deleted it and put the idea on the back burner while I concentrated on other things.  At some point I wanted to investigate using Terracotta, or possibly Hadoop, to distribute the computations.

However, it’s often the case with Open Source software that somebody smarter comes along and does the hard work for you.  I was delighted to find out today that Abdel Hakim Deneche has been busy integrating Watchmaker with the Apache Mahout project as part of Google’s Summer of Code programme.

I’d never heard of Mahout before.  According to Wikipedia, a Mahout is somebody who drives an elephant.  Apache Mahout is a sub-project of Lucene, the Java text search and indexing engine.  The Mahout project is focused on building scalable machine-learning libraries using Hadoop (presumably where the elephant connection comes in).

I haven’t yet tried using the Mahout software, but it looks like it provides a pretty straightforward way to distribute the fitness evaluations for just about any evolutionary algorithm implemented using Watchmaker.

Free Genetic Programming Book

Posted in Books, Evolutionary Computation by Dan on April 7th, 2008

The authors of a new book called A Field Guide to Genetic Programming have made it available for download in PDF form free of charge.  Weighing in at around 200 pages, it looks like a reasonably concise introduction to the topic (unlike some of the huge and hideously expensive GP books you can buy on Amazon).

This is good timing for me because I’ve recently started hacking together a GP example application to include in the next release of the Watchmaker Framework for Evolutionary Computation.  So I can catch up on a bit of background reading to make sure I’m doing things sensibly.  Watchmaker is a general purpose evolution framework, intended to address the full range of evolutionary algorithms.  I’ve been claiming for a while that you can use it for genetic programming, so I thought it was about time I demonstrated this.  I’m not aware of anybody having used Watchmaker for GP so far.  I’d love to hear from anybody who has done so.

Genetic programming is also covered in an accessible way in Toby Segaran’s excellent book, Programming Collective Intelligence, which includes GP examples in Python.

A Java Programmer’s Guide to Random Numbers, Part 1: Beyond java.util.Random

Posted in Evolutionary Computation, Java by Dan on April 3rd, 2008

More than you ever wanted to know about randomness in Java programs.

This is the first in a series of articles about random numbers in Java programs. The series covers the following topics and also serves as an introduction to the random numbers package of the Uncommons Maths library:

  • True Random Numbers and Pseudorandom Numbers
  • Statistical Quality (making sure the dice are not loaded)
  • Performance (because sometimes you really do need half a million random numbers a second)
  • Different kinds of randomness (because not everything can be modelled by tossing a coin)
  • Degrees of Freedom (is every possible outcome actually possible?)
  • Security and Integrity (when not losing money depends on nobody knowing what happens next)

Part 1: Beyond java.util.Random

Random numbers are useful in a wide variety of software applications. They provide a crucial element of uncertainty in an otherwise deterministic world. Without random numbers in computers, the hugely popular online poker industry would not exist, video games would be boring and predictable, iTunes would have no shuffle, cryptography would be much more difficult, and many innovative algorithms, such as those used in artificial intelligence and evolutionary computation, simply couldn’t work.

True Random Numbers and Pseudorandom Numbers

“Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.” – John von Neumann

Before continuing, it is important to make the distinction between so-called “true” random numbers and pseudorandom numbers. Though it may often seem otherwise to us as programmers, computer systems are infuriatingly deterministic. They are generally incapable of doing things randomly. To get a computer to behave in a way that is truly random, it is necessary to introduce some non-deterministic input. We could have somebody constantly rolling dice or flipping coins and typing in the results, but that is not very practical. A slightly more feasible approach is to construct a device that observes some real-world phenomenon that is known to be unpredictable, such as radioactive decay or atmospheric noise. Data extracted from these events can be used as a source of entropy for our applications. You could purchase such a device that plugs into a serial or USB port, though to access it from Java you’d probably have to use C/C++ and JNI. Alternatively, you could get true random numbers indirectly, from an online service such as Hotbits.

Since we can get truly unpredictable random numbers from this kind of hardware, why don’t we satisfy all of our randomness requirements in this way? Well the primary problem is throughput. These devices are quite limited in the quantity of randomness they can produce. They simply aren’t fast enough for many uses.

Pseudorandom numbers are not really random at all. They are the result of deterministic mathematical formulae. The best of these algorithms have been devised so that their output is statistically indistinguishable from true random numbers. PRNGs start with a single numeric seed value. The algorithm is applied to this seed to generate the output and a new, updated seed that is used to generate the next value. The mathematics involved is beyond the scope of this article – the definitive guide is the second volume of Donald Knuth’s The Art of Computer Programming.

An interesting property of this approach is that if you always start with the same seed value, you will always get the same sequence of “random” numbers. Though this can occasionally be useful, you would normally strive to avoid using the same seed value in the interests of unpredictability. A simple approach that is sufficient in many cases is to seed the PRNG from the current value of the system clock.
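Both properties – the deterministic update of the seed and the repeatability of a fixed seed – can be seen in a toy linear congruential generator. This is illustrative only; real PRNGs such as the Mersenne Twister are far more sophisticated:

```java
// A toy linear congruential generator (not suitable for real use):
// each call transforms the internal seed deterministically and returns
// part of the new state as output.
public class TinyLcg {
    private long seed;

    public TinyLcg(long seed) {
        this.seed = seed;
    }

    public int nextInt() {
        // Multiplier and increment from Knuth's MMIX LCG. The new seed
        // is derived purely from the old one, so the whole sequence is
        // determined by the initial seed value.
        seed = seed * 6364136223846793005L + 1442695040888963407L;
        return (int) (seed >>> 32);
    }
}
```

Two instances constructed with the same seed produce identical sequences forever; change the seed and the output changes from the very first value.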

Aside from speed, another advantage of pseudorandom number generators (PRNGs) over true random number generators (TRNGs) is that they are more predictably unpredictable. That is, the statistical properties of PRNGs are more reliable than is the case with TRNGs.

Statistical Quality

Why not java.util.Random?

The Java API helpfully provides a PRNG that we can use, the java.util.Random class. If you are a Java programmer you will almost certainly have used it at some point. For non-critical random numbers, e.g. adding some unpredictability to a game, it’s fine. It’s even pretty quick. Unfortunately, it’s not random enough – not by the standards required for more serious applications.

The problem of deciding whether a sequence of numbers is random is not an easy one to solve. You can’t simply look at the output and decide that it doesn’t look random enough. After all, if you toss a coin ten times, it could randomly come up heads every time, even though the probability of this sequence is pretty small. To get any kind of meaningful evaluation requires a large sample of the RNG output (perhaps millions of generated values). This sample can then be subjected to sophisticated statistical analysis.
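To give a flavour of what such analysis involves, here is the simplest possible check – a “monobit” frequency test over a large sample, which just verifies that roughly half of all generated bits are ones. Real suites like Diehard apply many far more demanding tests:

```java
import java.util.Random;

// Crude frequency ("monobit") check in the spirit of statistical RNG
// testing: over a large sample, about half of all generated bits should
// be ones. Passing this says very little; failing it is damning.
public class MonobitCheck {
    // Returns the fraction of one-bits across `count` 32-bit outputs.
    static double fractionOfOnes(Random rng, int count) {
        long ones = 0;
        for (int i = 0; i < count; i++) {
            ones += Integer.bitCount(rng.nextInt());
        }
        return ones / (32.0 * count);
    }
}
```

Note that even java.util.Random passes this trivial test comfortably – its flaws only show up under much more sophisticated scrutiny, which is exactly why large samples and serious test suites are needed.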

Probably the best known test suite for random number generators is George Marsaglia’s Diehard Battery of Tests of Randomness. Diehard says that java.util.Random is not sufficiently random. But you don’t have to interpret Diehard’s complicated reports to see this for yourself – this applet demonstrates it visually.


So if not java.util.Random, what about Java’s other RNG, java.security.SecureRandom? SecureRandom is built for cryptography, so it is specifically designed to avoid such statistical flaws. Diehard reports no issues with its output. Unfortunately, this quality comes at a high price: performance. In benchmarks, SecureRandom can be up to 60 times slower at generating random numbers than java.util.Random. This is bearable if you are only generating random values infrequently, but if your program relies on generating random numbers non-stop and as fast as possible (as many simulations do) then it’s a show-stopping bottleneck.

The good news is that, beyond the core API, there are random number generators as fast as (or faster than) java.util.Random with statistical properties as good as SecureRandom. The Uncommons Maths project provides a comprehensive open source random numbers package. Uncommons Maths provides three Random Number Generators, with different properties, for a wide variety of applications. Unlike java.util.Random, each of these RNGs completes the Diehard suite without any problems. Additionally, each of the Uncommons Maths RNGs is a sub-class of java.util.Random, which means any of them can be used as a drop-in replacement with minimal effort.

A good general-purpose PRNG is the MersenneTwisterRNG class. It is a pure Java port of Makoto Matsumoto and Takuji Nishimura’s proven and ultra-fast Mersenne Twister PRNG for C. Even faster is CellularAutomatonRNG, the Java port of Tony Pasqualoni’s experimental cellular automaton PRNG.

In the next article: Beyond dice and coins – using the right probability distribution.

Watchmaker Framework for Evolutionary Computation 0.4.3

Posted in Evolutionary Computation, Java by Dan on December 14th, 2007

This is mostly a maintenance release. Uncommons Maths is now a separate project so the Watchmaker Framework has been modified to use the official version of that library. There are a few other minor tweaks (a couple of classes have been moved around, but nothing in the core framework).

Version 0.4.3 also introduces an experimental EvolutionMonitor component. This is a Swing view that gives you some insight into the current state of the population while your evolutionary algorithm is running. In this first version all it does is graph the mean and peak fitness scores (using JFreeChart). Future versions will hopefully display more information (perhaps I will add an API to enable data to be extracted from the population while running). The EvolutionMonitor implements the EvolutionObserver interface, so you can hook it up easily by calling the addEvolutionObserver method of your EvolutionEngine.

The other new feature is a new termination condition for terminating the algorithm when the population fitness begins to stagnate. If this condition is used and there is no fitness improvement within a specified number of generations, the evolution engine will assume that no further improvement can be made and will return the fittest individual found so far. This is often a more practical approach than specifying a maximum total number of generations or a fixed time limit in advance.
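The logic of such a stagnation condition can be sketched as follows. This is an illustrative stand-in of my own, not the framework’s actual termination-condition API:

```java
// Sketch of a stagnation-based stopping rule: terminate once the best
// fitness has failed to improve for `generationLimit` consecutive
// generations.
public class StagnationDetector {
    private final int generationLimit;
    private double bestFitness = Double.NEGATIVE_INFINITY;
    private int generationsWithoutImprovement = 0;

    public StagnationDetector(int generationLimit) {
        this.generationLimit = generationLimit;
    }

    // Call once per generation with that generation's best fitness
    // (higher is fitter); returns true when evolution should stop.
    public boolean shouldTerminate(double currentBestFitness) {
        if (currentBestFitness > bestFitness) {
            bestFitness = currentBestFitness;
            generationsWithoutImprovement = 0; // any improvement resets the counter
        } else {
            generationsWithoutImprovement++;
        }
        return generationsWithoutImprovement >= generationLimit;
    }
}
```

Unlike a fixed generation count or time limit, this rule adapts to the problem: easy runs stop early, hard runs get as long as they keep making progress.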

Book Review: Programming Collective Intelligence

Posted in Evolutionary Computation, Python, Software Development by Dan on December 13th, 2007

The book is called “Programming Collective Intelligence” and is presented as a guide to building “Smart Web 2.0 Applications”, but it is essentially an extremely accessible explanation of a wide array of machine learning and data-mining algorithms. How do sites like Amazon and Last.FM make recommendations? How do search engines work? How does Google News manage to categorise and present the most important news articles without human intervention? How do you build a useful spam filter?

All of these questions are answered, and compelling example applications are built step-by-step to demonstrate the power of the ideas presented here. Decision trees, genetic algorithms, neural networks, support vector machines, genetic programming, Bayesian classifiers and non-negative matrix factorisation are among the techniques covered, all without the dry, maths-heavy text that normally fills books on these topics.

The examples throughout are exclusively in Python, which might have put me off had I realised it when I ordered the book. I have nothing against Python except for my complete lack of experience with it. However, the examples are easy enough to understand for anybody familiar with other high-level languages. As a result of reading the book, I may actually try my hand at a bit of Python hacking now.

How well do these techniques work? Well I’d never have found out about this book but for Amazon’s automated recommendations system. I’d thoroughly recommend this book to anyone looking to learn about interesting AI techniques without wading through opaque academic papers.

(If you find the genetic algorithms and genetic programming topics interesting, check out the Watchmaker Framework for Evolutionary Computation and some of the books recommended there.)

Announcing Uncommons Maths

Posted in Evolutionary Computation, Java by Dan on November 19th, 2007

Uncommons Maths is a Java library consisting of a comprehensive random numbers package and other useful mathematical utility classes. It was originally part of the Watchmaker Framework for Evolutionary Computation but, due to its usefulness in other domains, it has now been converted into a standalone project (Apache Licence).

This article briefly describes what’s available in this first public release. I am most definitely not a mathematician and, as such, this library is written by a programmer for programmers (but if mathematicians find it useful that’s good too). It includes classes that are useful in real world programs and is not intended to ever cover the full spectrum of mathematics. However, I hope that it will expand in scope over time in this spirit of pragmatism. To that end, suggestions and contributions are actively encouraged.

Random Number Generators

The Uncommons Maths library provides three easy-to-use, statistically-sound, high-performance pseudorandom number generators (RNGs). They are:

  • MersenneTwisterRNG – A Java port of the fast and reliable Mersenne Twister RNG originally developed by Makoto Matsumoto and Takuji Nishimura. This is faster1 than java.util.Random and does not have the statistical flaws2 of that RNG.
  • CellularAutomatonRNG – A Java port of Tony Pasqualoni’s ultra-fast Cellular Automaton RNG. It uses a 256-cell automaton to generate random values. To the best of my knowledge, this is the fastest1 available pure Java RNG that completes the Diehard test suite without any problems.
  • A cryptographically-strong3 non-linear RNG that is around 10x faster1 than java.security.SecureRandom. Reverse-engineering the generator state from observations of its output would involve cracking the AES block cipher.

  1. A benchmark comparing the performance of these three RNGs and the two JDK RNGs can be found here (under the title “RNG Performance”).
  2. This applet demonstrates the non-randomness of java.util.Random.
  3. The algorithm is not the only security consideration for RNGs. The source, secrecy and integrity of the seed data is also vital. For highly sensitive applications, consider using something like Fortuna.

Probability Distributions

Using the included probability distribution wrappers, these RNGs (and the standard JDK ones) can be used to generate values from Uniform, Normal, Binomial, Poisson and Exponential distributions.
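The principle behind such wrappers is that uniform output from an RNG is transformed into the target distribution. As a sketch of the idea (my own illustration, not the library’s implementation), inverse-transform sampling for an exponential distribution looks like this:

```java
import java.util.Random;

// Sketch of driving a non-uniform distribution from a uniform RNG:
// if U is uniform on (0, 1), then -ln(U) / rate is exponentially
// distributed with the given rate (inverse-transform sampling).
public class ExponentialSampler {
    static double nextExponential(Random rng, double rate) {
        double u;
        do {
            u = rng.nextDouble();
        } while (u == 0.0); // avoid log(0)
        return -Math.log(u) / rate;
    }
}
```

The same pattern works with any source of uniform values, which is why these wrappers can sit on top of the Uncommons Maths RNGs and the standard JDK ones alike.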

Permutations & Combinations

Uncommons Maths also includes generics-enabled combination and permutation generators. These are based on Java classes originally written by Michael Gilleland.


Statistics

Uncommons Maths provides a statistical data set class that can calculate a variety of descriptive statistics (variance, median, standard deviation, arithmetic and geometric means, etc.) for a set of values.
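The core of such a class can be sketched in a few lines. This is illustrative only – the real class offers more, including medians and geometric means:

```java
// Minimal descriptive statistics over a data set: arithmetic mean,
// population variance and standard deviation.
public class SimpleStats {
    static double mean(double[] values) {
        double sum = 0;
        for (double v : values) {
            sum += v;
        }
        return sum / values.length;
    }

    // Population variance: the average squared deviation from the mean.
    static double variance(double[] values) {
        double m = mean(values);
        double sumSquares = 0;
        for (double v : values) {
            sumSquares += (v - m) * (v - m);
        }
        return sumSquares / values.length;
    }

    static double standardDeviation(double[] values) {
        return Math.sqrt(variance(values));
    }
}
```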


Other useful features in Uncommons Maths include utility methods to complement those in java.lang.Math, and utility classes for manipulating binary data.

« Older Posts