Fixing WP-Syntax

Posted in The Internet by Dan on January 26th, 2009

WordPress is great. In some ways. Sometimes. It has all these plugins that add all kinds of neat features. It even has a really easy-to-use auto-update feature for plugins. So when I saw that a new version of Ryan McGeary’s very useful WP-Syntax plugin was available, I let it do the upgrade for me.

Unfortunately, this made a mess of my previous post (thanks to David for pointing this out to me – I wasn’t actually diligent enough to check for myself that the upgrade hadn’t broken anything). All the < and > characters in my Java code had been replaced by HTML entities (&lt; and &gt;). Rolling back to WP-Syntax 0.9.1 didn’t fix the problem, which was odd. Then a few neurons flickered into life and I had a vague recollection of having solved this problem previously. It turns out that I had modified the PHP source for the WP-Syntax plugin (as described here). I simply had to make the same change for the new version. I’m not sure why WP-Syntax does not include this modification by default. Maybe it breaks something else, but it works for me.

Practical Evolutionary Computation: Implementation

Posted in Evolutionary Computation, Java by Dan on January 21st, 2009

If an evolutionary algorithm is a good fit for a particular problem, there are plenty of options when it comes to implementing it. You may choose to use a high-level programming language for simplicity, or a low-level language for performance. You could write all of the code yourself from scratch, or you could reuse pre-written components and libraries. In this article we will necessarily be using one particular approach, but it is worth noting that there are alternatives.

Evolution Frameworks

As we saw previously, the basic outline of an evolutionary algorithm is fairly straightforward. It consists of a main loop that performs one generation per iteration, supplemented by a few functions to perform fitness evaluation, selection and mutation/cross-over. When implementing a simple EA, writing this structural code is not particularly onerous. However, if you write many different evolutionary programs, you end up writing code that is very similar over and over again.

A good programmer will usually want to extract and reuse this common code. Once you have done this, you have the basis of an evolutionary computation framework. Typically this will consist of an evolution engine that is reusable and that can accept different functions to customise fitness evaluation, selection and evolutionary operators.

An alternative to using a home-grown framework is to choose a ready-made one. There are open source evolutionary computation frameworks available for most programming languages. For popular languages, such as C, C++ and Java, there are dozens.

The advantage of a ready-made framework that is used by many other programmers is that it will have been well tested and should be free of significant bugs and performance problems. It may also provide advanced features such as parallel and/or distributed processing.

The Watchmaker Framework

The Watchmaker Framework for Evolutionary Computation is an extensible, high-performance, object-oriented framework for implementing platform-independent evolutionary algorithms in Java. It is freely available under a permissive Open Source licence.

This article introduces the core components of the Watchmaker Framework and shows how they can be used to implement simple evolutionary algorithms such as the “Hello World” example outlined previously.

The Evolution Engine

The central object of an evolutionary program built with the Watchmaker Framework is the evolution engine. An evolution engine is a general-purpose implementation of the evolutionary algorithm outline introduced previously.

The framework provides multiple implementations of the EvolutionEngine interface, but the one that you will usually want to use is ConcurrentEvolutionEngine. As its name suggests, it takes advantage of the parallel processing facilities of your computer in order to speed up the evolutionary process.

An EvolutionEngine has a single generic type parameter that indicates the type of object that it can evolve. For the “Hello World” program, we need to be able to evolve Java strings. Code that creates an engine that can evolve strings would look something like this:

EvolutionEngine<String> engine
    = new ConcurrentEvolutionEngine<String>(candidateFactory,
                                            evolutionaryOperator,
                                            fitnessEvaluator,
                                            selectionStrategy,
                                            rng);

Once you have created an EvolutionEngine, running your program is as simple as calling the evolve method with appropriate arguments. However, as you can see from the code snippet above, there is a little bit of work to be done first in order to create an EvolutionEngine that is configured appropriately for the given problem. The constructor of the ConcurrentEvolutionEngine class requires five objects. These are:

  • A Candidate Factory
  • An Evolutionary Operator
  • A Fitness Evaluator
  • A Selection Strategy
  • A Random Number Generator

The Candidate Factory

The first object that needs to be plugged into the evolution engine is a candidate factory. Every evolutionary simulation must start with an initial population of candidate solutions and the CandidateFactory interface is the mechanism by which the evolution engine creates this population.

A candidate factory implementation has an associated type: it can only create objects of that type, and that type must match the type of the evolution engine that it is plugged into. You can write your own implementation of CandidateFactory for your program or, if you are using a common type such as strings, lists or arrays, you may be able to use a ready-made factory from the org.uncommons.watchmaker.framework.factories package.

For our “Hello World” program, we can use the provided StringFactory:

// Define the set of permitted characters (A-Z plus space).
char[] chars = new char[27];
for (char c = 'A'; c <= 'Z'; c++)
{
    chars[c - 'A'] = c;
}
chars[26] = ' ';
 
// Factory for random 11-character Strings.
CandidateFactory<String> factory = new StringFactory(chars, 11);

Tip: When writing your own CandidateFactory implementations, it is easiest to extend the provided AbstractCandidateFactory base class since there is then only a single method that must be implemented.
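
As a rough illustration of that tip, a home-grown factory for fixed-length strings might look something like the sketch below. The class name is invented for this example, and it assumes that generateRandomCandidate(Random) is the one abstract method that AbstractCandidateFactory leaves to be implemented; check the API documentation for the exact signature.

public class RandomStringFactory extends AbstractCandidateFactory<String>
{
    private final char[] alphabet;
    private final int length;
 
    public RandomStringFactory(char[] alphabet, int length)
    {
        this.alphabet = alphabet;
        this.length = length;
    }
 
    // The only method that sub-classes are expected to provide: build one
    // random candidate using the supplied RNG.
    public String generateRandomCandidate(Random rng)
    {
        char[] candidate = new char[length];
        for (int i = 0; i < length; i++)
        {
            candidate[i] = alphabet[rng.nextInt(alphabet.length)];
        }
        return new String(candidate);
    }
}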

Evolutionary Operators

Evolutionary operators are the components that perform the actual evolution of a population. Cross-over is an evolutionary operator, as is mutation.

In the Watchmaker Framework, evolutionary operators are defined in terms of the EvolutionaryOperator interface. This interface declares a single method that takes a list of selected individuals and returns a list of evolved individuals. Some operators (e.g. mutation) will process one individual at a time, whereas others will process individuals in groups (cross-over processes two individuals at a time).

As with candidate factories, evolutionary operators have associated types that must be compatible with the type of the evolution engine that they are used with. And, as with candidate factories, the framework provides several ready-made operators for common types. These can be found in the org.uncommons.watchmaker.framework.operators package. The cross-over and mutation operators that we need for our “Hello World” program are provided by the StringCrossover and StringMutation classes.
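
If none of the ready-made operators fits your problem, writing your own is just a matter of implementing that single method. The sketch below shows the general shape, assuming the method is apply(List, Random) as described above (check the interface’s exact, possibly generic, signature in the API documentation); the operator itself, an occasional whole-string reversal, is purely illustrative and not part of the framework.

public class StringInversion implements EvolutionaryOperator<String>
{
    // Reverses each candidate with a small probability (hard-coded here
    // for brevity), otherwise passes it through unchanged.
    public List<String> apply(List<String> selectedCandidates, Random rng)
    {
        List<String> result = new ArrayList<String>(selectedCandidates.size());
        for (String candidate : selectedCandidates)
        {
            if (rng.nextDouble() < 0.01)
            {
                result.add(new StringBuilder(candidate).reverse().toString());
            }
            else
            {
                result.add(candidate);
            }
        }
        return result;
    }
}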

The Evolution Pipeline

Alert readers will have noticed that the evolution engine constructor only accepts a single evolutionary operator. So how can we use both cross-over and mutation? The answer is provided by the EvolutionPipeline operator. This is a compound evolutionary operator that chains together multiple operators of the same type.

List<EvolutionaryOperator<String>> operators
    = new LinkedList<EvolutionaryOperator<String>>();
operators.add(new StringCrossover());
operators.add(new StringMutation(chars, new Probability(0.02)));
EvolutionaryOperator<String> pipeline = new EvolutionPipeline<String>(operators);

Note: The evolution pipeline is just one of many useful operators included in the org.uncommons.watchmaker.framework.operators package. Elaborate evolution schemes can be constructed from combinations of these operators. Users of the Watchmaker Framework should take a few minutes to browse the API documentation and familiarise themselves with the available classes.

The Fitness Evaluator

So far we’ve been able to build our evolutionary program by simply combining instances of classes provided by the framework. There is one part of the program that we will always have to write for ourselves, though, and that is the fitness function, which is necessarily different for every program.

In the Watchmaker Framework, a fitness function is written by implementing the FitnessEvaluator interface. The getFitness method of this interface takes a candidate solution and returns its fitness score as a Java double. The method actually takes two arguments, but we can ignore the second for now.

The listing below is a fitness evaluator for the “Hello World” program. It simply assigns one point for each character in the candidate string that matches the corresponding position in the target string.

public class StringEvaluator implements FitnessEvaluator<String>
{
    private final String targetString = "HELLO WORLD";
 
    /**
     * Assigns one "fitness point" for every character in the
     * candidate String that matches the corresponding position in
     * the target string.
     */
    public double getFitness(String candidate,
                             List<? extends String> population)
    {
        int matches = 0;
        for (int i = 0; i < candidate.length(); i++)
        {
            if (candidate.charAt(i) == targetString.charAt(i))
            {
                ++matches;
            }
        }
        return matches;
    }
 
    public boolean isNatural()
    {
        return true;
    }
}

By some fitness measures, a higher value indicates a fitter solution. In other cases a lower value is better. The isNatural method of a fitness evaluator simply specifies which scenario applies. In Watchmaker Framework terminology, a natural fitness function is one that returns higher values for fitter individuals.
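
It is sometimes more natural to measure error rather than quality. Purely as an illustration (the class name is invented, not part of the framework), the evaluator above could just as easily have counted mismatches instead of matches, in which case lower scores would be better and isNatural would return false:

public class StringErrorEvaluator implements FitnessEvaluator<String>
{
    private final String targetString = "HELLO WORLD";
 
    // Counts the characters that do NOT match the target, so a perfect
    // candidate scores zero.
    public double getFitness(String candidate,
                             List<? extends String> population)
    {
        int errors = 0;
        for (int i = 0; i < candidate.length(); i++)
        {
            if (candidate.charAt(i) != targetString.charAt(i))
            {
                ++errors;
            }
        }
        return errors;
    }
 
    public boolean isNatural()
    {
        return false;  // Lower values are fitter.
    }
}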

Selection Strategy

Selection is a key ingredient in any evolutionary algorithm. It’s what determines which individuals survive to reproduce and which are discarded. All we’ve said about selection so far is that it should favour fitter individuals. This definition permits several different implementations. The Watchmaker Framework includes all of the most common selection strategies in the org.uncommons.watchmaker.framework.selection package. These are sufficient for most evolutionary algorithms but, if necessary, it is straightforward to write your own implementation of the SelectionStrategy interface.

Some selection strategies work better than others for certain problems. Often a little trial-and-error is required to pick the best option. For now we will just create an instance of the RouletteWheelSelection class and use that for our “Hello World” application.  Roulette wheel selection is the most common type of fitness-proportionate selection. It gives all individuals a chance of being selected but favours the fitter individuals since an individual’s selection probability is derived from its fitness score.
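
Because every selection strategy implements the same SelectionStrategy interface, trying a different one later is a one-line change. For example, tournament selection could be swapped in as shown below; the constructor argument (the probability that the fitter of the two candidates in each tournament wins) reflects my reading of the API and is worth checking against the documentation.

// An alternative to roulette wheel selection: repeatedly pick two candidates
// at random and select the fitter of the two with probability 0.7.
SelectionStrategy<Object> selection = new TournamentSelection(new Probability(0.7));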

Random Number Generator

The final dependency that must be satisfied in order to create an evolution engine is the random number generator (RNG). An evolution engine has a single RNG that it passes to its candidate factory, evolutionary operator and selection strategy. A discussion of the merits of various RNGs is beyond the scope of this article. The standard Java RNG (java.util.Random) is flawed, so instead we will use the provided org.uncommons.maths.random.MersenneTwisterRNG.

Completing the Jigsaw

We’ve now got all of the necessary pieces to complete the “Hello World” example application. Assuming that you’ve already created the StringEvaluator class (defined above) in a separate file, the code needed to create the evolution engine looks like this:

// Create a factory to generate random 11-character Strings.
char[] chars = new char[27];
for (char c = 'A'; c <= 'Z'; c++)
{
    chars[c - 'A'] = c;
}
chars[26] = ' ';
CandidateFactory<String> factory = new StringFactory(chars, 11);
 
// Create a pipeline that applies cross-over then mutation.
List<EvolutionaryOperator<String>> operators
    = new LinkedList<EvolutionaryOperator<String>>();
operators.add(new StringCrossover());
operators.add(new StringMutation(chars, new Probability(0.02)));
EvolutionaryOperator<String> pipeline = new EvolutionPipeline<String>(operators);
 
FitnessEvaluator<String> fitnessEvaluator = new StringEvaluator();
SelectionStrategy<Object> selection = new RouletteWheelSelection();
Random rng = new MersenneTwisterRNG();
 
EvolutionEngine<String> engine
    = new ConcurrentEvolutionEngine<String>(factory,
                                            pipeline,
                                            fitnessEvaluator,
                                            selection,
                                            rng);

The listing above only creates the evolution engine, it does not perform any evolution. For that we need to call the evolve method. The evolve method takes three parameters. The first is the size of the population. This is the number of candidate solutions that exist at any time. A bigger population will often result in a satisfactory solution being found in fewer generations. On the other hand, the processing of each generation will take longer because there are more individuals to deal with. For the “Hello World” program, a population size of 10 is fine.

The second parameter is concerned with elitism. For now, just use a value of zero. The final varargs parameter specifies one or more termination conditions.

Termination Conditions

Termination conditions determine when the evolution stops. There are a few reasons why we might want it to stop. The most obvious is that it has found the solution that we are looking for. In the case of the “Hello World” program, that is when we have found the target string. The target string has a fitness score of 11, so we use the TargetFitness condition.

To complete the evolutionary “Hello World” application, add the following two lines:

String result = engine.evolve(10, 0, new TargetFitness(11, true));
System.out.println(result);

Note: When we move on to less trivial evolutionary programs, we will rarely be able to specify the outcome so precisely. The org.uncommons.watchmaker.framework.termination package includes other termination conditions that can be used. For example, we may want the program to run for a certain period of time, or a certain number of generations, and then return the best solution it has found up until that point. The ElapsedTime and GenerationCount conditions provide this functionality. Alternatively, we may want the program to continue as long as it is finding progressively better solutions. The Stagnation condition will terminate the evolution after a set number of generations pass without any improvement in the fitness of the fittest candidate. If multiple termination conditions are specified, the evolution will stop as soon as any one of them is satisfied.
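
For example, a run that should give up after 100 generations or one minute of elapsed time, whichever comes first, might look something like the sketch below (the constructor arguments shown, a generation count and a duration in milliseconds, are my reading of the API and worth double-checking against the documentation):

// Stop as soon as either condition is satisfied.
String result = engine.evolve(10,   // Population size.
                              0,    // No elitism.
                              new GenerationCount(100),
                              new ElapsedTime(60000)); // 60,000 milliseconds = 1 minute.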

Evolution Observers

Compile and run the above code and, perhaps after a brief pause, you’ll see the following output:

  HELLO WORLD

This is quite probably the most convoluted “Hello World” program you’ll ever write. It also gives no hints as to its evolutionary nature. We can make the program more interesting by adding an EvolutionObserver to report on the progress of the evolution at the end of each generation. Add the following code to your program before the call to the evolve method:

engine.addEvolutionObserver(new EvolutionObserver<String>()
{
    public void populationUpdate(PopulationData<? extends String> data)
    {
        System.out.printf("Generation %d: %sn",
                                    data.getGenerationNumber(),
                                    data.getBestCandidate());
    }
});

Re-compile the program and run it again. This time you’ll see all of the steps taken to arrive at the target string:

  Generation 0: JIKDORHOQZJ
  Generation 1: ULLLFQWZPXG
  Generation 2: UEULKFVFZLS
  Generation 3: KLLLFKZGRLS
  Generation 4: HLLLFKZGRLS
  Generation 5: HEDPOYWOZLS
  Generation 6: HEULKIWWZLD
  Generation 7: HPRLOYWOZLS
  Generation 8: HEULOYWOZLS
  Generation 9: HEULOYWORLS
  Generation 10: HEULOYWORLS
  Generation 11: HPLLK WQRLH
  Generation 12: HEBLOYWQRLS
  Generation 13: HEULOYWOBLA
  Generation 14: HEBLOIWMRLD
  Generation 15: HEBLOIWMRLD
  Generation 16: HEYLFNWQRLD
  Generation 17: HEBLOIWORLS
  Generation 18: HEBLOIWORLT
  Generation 19: HEBLOKWGRLD
  Generation 20: HELLAYWORLS
  Generation 21: HELHOIWORLT
  Generation 22: HEWLOIWORLS
  Generation 23: HEBLOYCORLD
  Generation 24: HELLKQWORLD
  Generation 25: HELLOIWORLT
  Generation 26: HELLOIWORLS
  Generation 27: HELLKQWORLD
  Generation 28: HELLFYWORLD
  Generation 29: HELLOIWORLD
  Generation 30: HELLOIWORLD
  Generation 31: HELLOIWORLD
  Generation 32: HELLOIWORLD
  Generation 33: HELLOIWORLD
  Generation 34: HELLOIWORLD
  Generation 35: HELLOIWDRLD
  Generation 36: HELLOIWORLD
  Generation 37: HELLOIWORLD
  Generation 38: HELLOPWORLD
  Generation 39: HELLOIWORLD
  Generation 40: HELLO WORLD
  HELLO WORLD

This is the second in a short series of articles on practical Evolutionary Computation.  The text is taken from the work-in-progress documentation for the Watchmaker Framework for Evolutionary Computation.  The first article provided an introduction to the field of evolutionary computation.

Further Reading

  • An Introduction to Genetic Algorithms
  • Introduction to Evolutionary Computing
  • The Blind Watchmaker

Practical Evolutionary Computation: An Introduction

Posted in Evolutionary Computation, Software Development by Dan on January 20th, 2009

Software is normally developed in a very precise, deterministic way. The behaviour of a computer is governed by strict logical rules. A computer invariably does exactly what it is told to do.

When writing a program to solve a particular problem, software developers will identify the necessary sub-tasks that the program must perform. Algorithms are chosen and implemented for each task. The completed program becomes a detailed specification of exactly how to get from A to B. Every aspect is carefully designed by its developers who must understand how the various components interact to deliver the program’s functionality.

This prescriptive approach to solving problems with computers has served us well and is responsible for most of the software applications that we use today. However, it is not without limitations. Solutions to problems are constrained by the intuition, knowledge and prejudices of those who develop the software. The programmers have to know exactly how to solve the problem.

Another characteristic of the prescriptive approach that is sometimes problematic is that it is best suited to finding exact answers. Not all problems have exact solutions, and some that do may be too computationally expensive to solve. Sometimes it is more useful to be able to find an approximate answer quickly than to waste time searching for a better solution.

What are Evolutionary Algorithms?

Evolutionary algorithms (EAs) are inspired by the biological model of evolution and natural selection first proposed by Charles Darwin in 1859. In the natural world, evolution helps species adapt to their environments. Environmental factors that influence the survival prospects of an organism include climate, availability of food and the dangers of predators.

Species change over the course of many generations. Mutations occur randomly. Some mutations will be advantageous, but many will be useless or detrimental. Progress comes from the feedback provided by non-random natural selection. For example, organisms that can survive for long periods without water will be more likely to thrive in dry conditions than those that can’t. Likewise, animals that can run fast will be more successful at evading predators than their slower rivals. If a random genetic modification helps an organism to survive and to reproduce, that modification will itself survive and spread throughout the population, via the organism’s offspring.

Evolutionary algorithms are based on a simplified model of this biological evolution. To solve a particular problem we create an environment in which potential solutions can evolve. The environment is shaped by the parameters of the problem and encourages the evolution of good solutions.

The field of Evolutionary Computation encompasses several types of evolutionary algorithm. These include Genetic Algorithms (GAs), Evolution Strategies, Genetic Programming (GP), Evolutionary Programming and Learning Classifier Systems.

The most common type of evolutionary algorithm is the generational genetic algorithm.  The basic outline of a generational GA is as follows (most other EA variants are broadly similar).  A population of candidate solutions is iteratively evolved over many generations. Mimicking the concept of natural selection in biology, the survival of candidates (or their offspring) from generation to generation in an EA is governed by a fitness function that evaluates each candidate according to how close it is to the desired outcome, and a selection strategy that favours the better solutions. Over time, the quality of the solutions in the population should improve. If the program is successful, we can terminate the evolution once it has found a solution that is good enough.

An Example

Now that we have introduced the basic concepts and terminology, I will attempt to illustrate by way of an example. Suppose that we want to use evolution to generate a particular character string, for example “HELLO WORLD”. This is a contrived example in as much as it assumes that we don’t know how to create such a string and that evolution is the best approach available to us. However, bear with me as this simple example is useful for demonstrating exactly how the evolutionary approach works.

Each candidate solution in our population will be a string. We’ll use a fixed-length representation so that each string is 11 characters long. Each character in a string will be one of the 27 valid characters (the upper case letters ‘A’ to ‘Z’ plus the space character).

For the fitness function we’ll use the simple approach of assigning a candidate solution one point for each position in the string that has the correct character. For the string “HELLO WORLD” this gives a maximum possible fitness score of 11 (the length of the string).

The first task for the evolutionary algorithm is to randomly generate the initial population. We can use any size population that we choose. Typical EA population sizes can vary from tens to thousands of individuals. For this example we will use a population size of 10. After the initialisation of the population we might have the following candidates (fitness scores in brackets):

  1.  GERZUNFXCEN  (1)
  2.  HSFDAHDMUYZ  (1)
  3.  UQ IGARHGJN  (0)
  4.  ZASIB WSUVP  (2)
  5.  XIXROIUAZBH  (1)
  6.  VDLGCWMBFYA  (1)
  7.  SY YUHYRSEE  (0)
  8.  EUSVBIVFHFK  (0)
  9.  HHENRFZAMZH  (1)
  10. UJBBDFZPLCN  (0)

None of these candidate solutions is particularly good. The best (number 4) has just two characters out of eleven that match the target string (the space character and the ‘W’).

The next step is to select candidates based on their fitness and use them to create a new generation.  One technique for favouring the selection of fitter candidates over weaker candidates is to assign each candidate a selection probability proportionate to its fitness.

If we use fitness-proportionate selection, none of the candidates with zero fitness will be selected and the candidate with a fitness of 2 is twice as likely to be selected as any of the candidates with a fitness of 1. For the next step we need to select 10 parents, so it is obvious that some of the fit candidates are going to be selected multiple times.

Now that we have some parents, we can breed the next generation. We do this via a process called cross-over, which is analogous to sexual reproduction in biology. For each pair of parents, a cross-over point is selected randomly. Assuming that the first two randomly selected parents are numbers 2 and 4, if the cross-over occurs after the first four characters, we will get the following offspring:

  Parent 1:     HSFDAHDMUYZ
  Parent 2:     ZASIB WSUVP
  Offspring 1:  HSFDB WSUVP
  Offspring 2:  ZASIAHDMUYZ

This recombination has given us two new candidates for the next generation, one of which is better than either of the parents (offspring 1 has a fitness score of 3). This shows how cross-over can lead towards better solutions. However, looking at the initial population as a whole, we can see that no combination of cross-overs will ever result in a candidate with a fitness higher than 6. This is because, among all 10 original candidates, there are only 6 positions in which we have the correct character.

This can be mitigated to some extent by increasing the size of the population. With 100 individuals in the initial population we would be much more likely to have the necessary building blocks for a perfect solution, but there is no guarantee. This is where mutation comes in.

Mutation is implemented by modifying each character in a string according to some small probability, say 0.02 or 0.05. This means that any single individual will be changed only slightly by mutation, or perhaps not at all.

By applying mutation to each of the offspring produced by cross-over, we will occasionally introduce correct characters in new positions. We will also occasionally remove correct characters but these bad mutations are unlikely to survive selection in the next generation, so this is not a big problem. Advantageous mutations will be propagated by cross-over and selection and will quickly spread throughout the population.

After repeating this process for dozens or perhaps even hundreds of generations we will eventually converge on our desired solution.

This is a convoluted process for finding a string that we already knew to start with. However, as we shall see later, the evolutionary approach generalises to deal with problems where we don’t know what the best solution is and therefore can’t encode that knowledge in our fitness function.

The important point demonstrated by this example is that we can arrive at a satisfactory solution without having to enumerate every possible candidate in the search space. Even for this trivial example, a brute force search would involve generating and checking approximately 5.6 quadrillion strings.

The Outline of an Evolutionary Algorithm

  1. Genesis – Create an initial set (population) of n candidate solutions. This may be done entirely randomly or the population may be seeded with some hand-picked candidates.
  2. Evaluation – Evaluate each member of the population using some fitness function.
  3. Survival of the Fittest – Select a number of members of the evaluated population, favouring those with higher fitness scores. These will be the parents of the next generation.
  4. Evolution – Generate a new population of offspring by randomly altering and/or combining elements of the parent candidates. The evolution is performed by one or more evolutionary operators. The most common operators are cross-over and mutation. Cross-over takes two parents, cuts them each into two or more pieces and recombines the pieces to create two new offspring. Mutation copies an individual but with small, random modifications (such as flipping a bit from zero to one).
  5. Iteration – Repeat steps 2-4 until a satisfactory solution is found or some other termination condition is met (such as the number of generations or elapsed time).
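
To make the outline concrete, here is a complete, self-contained Java sketch of a generational GA that evolves the “HELLO WORLD” string from the example above, using a population of 100 so that the necessary building blocks are likely to be present. It is written from scratch purely for illustration: the class and helper names are my own, and it uses a simple tournament selection (picking the fitter of two random candidates) rather than the roulette wheel selection described earlier, just to keep the code short. A framework-based version is shown in the next article in this series.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
 
public class HelloWorldGA
{
    private static final String TARGET = "HELLO WORLD";
    private static final char[] ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ".toCharArray();
    private static final Random RNG = new Random();
 
    public static void main(String[] args)
    {
        // 1. Genesis: create a random initial population.
        List<String> population = new ArrayList<String>();
        for (int i = 0; i < 100; i++)
        {
            population.add(randomCandidate());
        }
 
        // 5. Iteration: repeat until a perfect candidate appears.
        while (fitness(fittest(population)) < TARGET.length())
        {
            List<String> nextGeneration = new ArrayList<String>();
            while (nextGeneration.size() < population.size())
            {
                // 3. Survival of the fittest: pick two parents, favouring fitter candidates.
                String parent1 = select(population);
                String parent2 = select(population);
                // 4. Evolution: single-point cross-over followed by mutation.
                int point = RNG.nextInt(TARGET.length());
                String child = parent1.substring(0, point) + parent2.substring(point);
                nextGeneration.add(mutate(child));
            }
            population = nextGeneration;
        }
        System.out.println(fittest(population));
    }
 
    private static String randomCandidate()
    {
        char[] chars = new char[TARGET.length()];
        for (int i = 0; i < chars.length; i++)
        {
            chars[i] = ALPHABET[RNG.nextInt(ALPHABET.length)];
        }
        return new String(chars);
    }
 
    // 2. Evaluation: one point for each character in the correct position.
    private static int fitness(String candidate)
    {
        int score = 0;
        for (int i = 0; i < TARGET.length(); i++)
        {
            if (candidate.charAt(i) == TARGET.charAt(i))
            {
                score++;
            }
        }
        return score;
    }
 
    // Tournament selection (used here for brevity): the fitter of two random picks.
    private static String select(List<String> population)
    {
        String a = population.get(RNG.nextInt(population.size()));
        String b = population.get(RNG.nextInt(population.size()));
        return fitness(a) >= fitness(b) ? a : b;
    }
 
    // Mutation: replace each character with probability 0.02.
    private static String mutate(String candidate)
    {
        char[] chars = candidate.toCharArray();
        for (int i = 0; i < chars.length; i++)
        {
            if (RNG.nextDouble() < 0.02)
            {
                chars[i] = ALPHABET[RNG.nextInt(ALPHABET.length)];
            }
        }
        return new String(chars);
    }
 
    private static String fittest(List<String> population)
    {
        String best = population.get(0);
        for (String candidate : population)
        {
            if (fitness(candidate) > fitness(best))
            {
                best = candidate;
            }
        }
        return best;
    }
}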

When are Evolutionary Algorithms Useful?

Evolutionary algorithms are typically used to provide good approximate solutions to problems that cannot be solved easily using other techniques. Many optimisation problems fall into this category. It may be too computationally-intensive to find an exact solution but sometimes a near-optimal solution is sufficient. In these situations evolutionary techniques can be effective. Due to their random nature, evolutionary algorithms are never guaranteed to find an optimal solution for any problem, but they will often find a good solution if one exists.

One example of this kind of optimisation problem is the challenge of timetabling. Schools and universities must arrange room and staff allocations to suit the needs of their curriculum. There are several constraints that must be satisfied. A member of staff can only be in one place at a time, they can only teach classes that are in their area of expertise, rooms cannot host lessons if they are already occupied, and classes must not clash with other classes taken by the same students. This is a combinatorial problem and known to be NP-Hard. It is not feasible to exhaustively search for the optimal timetable due to the huge amount of computation involved. Instead, heuristics must be used. Genetic algorithms have proven to be a successful way of generating satisfactory solutions to many scheduling problems.

Evolutionary algorithms can also be used to tackle problems that humans don’t really know how to solve. An EA, free of any human preconceptions or biases, can generate surprising solutions that are comparable to, or better than, the best human-generated efforts. It is merely necessary that we can recognise a good solution if it were presented to us, even if we don’t know how to create a good solution. In other words, we need to be able to formulate an effective fitness function.

[Image: NASA ESG evolved antenna]

Engineers working for NASA know a lot about physics. They know exactly which characteristics make for a good communications antenna. But the process of designing an antenna so that it has the necessary properties is hard. Even though the engineers know what is required from the final antenna, they may not know how to design the antenna so that it satisfies those requirements.

NASA’s Evolvable Systems Group has used evolutionary algorithms to successfully evolve antennas for use on satellites. These evolved antennas (pictured) have irregular shapes with no obvious symmetry. It is unlikely that a human expert would have arrived at such an unconventional design. Despite this, when tested these antennas proved to be extremely well adapted to their purpose.

Other Examples of Evolutionary Computation in Action

Pre-requisites

There are two requirements that must be met before an evolutionary algorithm can be used for a particular problem. Firstly, we need a way to encode candidate solutions to the problem. The simplest encoding, and that used by many genetic algorithms, is a bit string. Each candidate is simply a sequence of zeros and ones. This encoding makes cross-over and mutation very straightforward, but that does not mean that you cannot use more complicated representations. In fact, most of the examples listed in the previous section used more sophisticated candidate representations. As long as we can devise a scheme for evolving the candidates, there really is no restriction on the types that we can use. Genetic programming (GP) is a good example of this. GP evolves computer programs represented as syntax trees.

The second requirement for applying evolutionary algorithms is that there must be a way of evaluating partial solutions to the problem – the fitness function. It is not sufficient to evaluate solutions as right or wrong; the fitness score needs to indicate how right or, if your glass is half empty, how wrong a candidate solution is. So a function that returns either 0 or 1 is useless. A function that returns a score on a scale of 1 – 100 is better. We need shades of grey, not just black and white, since this is how the algorithm guides the random evolution to find increasingly better solutions.

This is the first in a short series of articles on practical Evolutionary Computation.  The text is taken from the work-in-progress documentation for the Watchmaker Framework for Evolutionary Computation.  The next article will demonstrate how to implement evolutionary algorithms in Java using the Watchmaker Framework.

Further Reading

  • An Introduction to Genetic Algorithms
  • Introduction to Evolutionary Computing
  • The Blind Watchmaker

Random Number Generators: There should be only one

Posted in Java by Dan on January 11th, 2009

I came across this interesting RNG-related question on StackOverflow the other day. Since I spent a fair bit of time constructing an answer, I thought I’d re-post it here and expand on it slightly.

Basically the questioner wanted to know why the code below produced distinctly non-random results. It generates 1500 values between 0 and 7 and keeps a tally of how often each value occurs. The results were 1310 fives and 190 sixes (the numbers 0, 1, 2, 3, 4 and 7 did not occur even once). The expectation was that the values would be uniformly distributed, with each of the eight values occurring an approximately equal number of times.

int[] vals = new int[8];
for (int i = 0; i < 1500; i++)
    vals[new Random(i).nextInt(8)]++;
System.out.println(Arrays.toString(vals));

The most suspicious part of this code is that it creates and seeds a new random number generator for every iteration. My initial brief answer was the suggestion that the code should use only a single RNG instance. It’s good advice and solves the problem for this particular code but it doesn’t explain why the original version generates only fives and sixes. It’s not like every RNG instance has the same seed – in fact every single one is different (the loop index is used as the seed). Somebody else asked if I knew why the RNGs were behaving the same even though they were seeded differently. Well I didn’t know but I decided to find out.
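
Rewritten to use a single shared RNG instance, the snippet looks like this and produces the roughly uniform spread of counts that the questioner expected:

// One RNG, created once and reused for every iteration.
Random rng = new Random();
int[] vals = new int[8];
for (int i = 0; i < 1500; i++)
    vals[rng.nextInt(8)]++;
System.out.println(Arrays.toString(vals));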

I was pretty sure it was because the seeds were non-random (I’ve discussed the seeding of RNGs previously). Because the seeds were 64-bit integers in the range 0-1500, the top 53 bits were identical (all zeros) across all of the seeds. Ideally the RNG implementation would be able to overcome this lack of diversity in the seed data, but apparently not (if we repeat the experiment with a better RNG, such as the MersenneTwisterRNG, we do not see such obvious anomalies). To see how this all plays out to generate only fives and sixes requires a detour into the source code of java.util.Random.

When you specify a seed in the constructor it is manipulated as follows:

seed = (seed ^ 0x5DEECE66DL) & mask;

Where the mask simply retains the lower 48 bits and discards the others.
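
In the JDK source the mask is, in effect, defined like this:

// Retain only the lowest 48 bits.
long mask = (1L << 48) - 1;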

When generating the actual random bits, this seed is manipulated as follows:

randomBits = (seed * 0x5DEECE66DL + 0xBL) & mask;

Now if you consider that the seeds used were sequential (0 – 1499), and they were used once then discarded, the first four seeds generated the following four sets of random bits:

101110110010000010110100011000000000101001110100
101110110001101011010101011100110010010000000111
101110110010110001110010001110011101011101001110
101110110010011010010011010011001111000011100001

Note that the top 10 bits are identical in each case. This is a problem because we only want to generate values in the range 0-7 (which only requires a few bits) and java.util.Random does this by shifting the higher bits to the right and discarding the low bits. It does this because in the general case the high bits are more random than the low bits. In this case they are not because the seed data was poor.

Finally, to see how these bits convert into the decimal values that we get you need to know that java.util.Random makes a special case when n is a power of 2. It requests 31 random bits (the top 31 bits from the above 48), multiplies that value by n and then shifts it 31 bits to the right.
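
The relevant branch of java.util.Random.nextInt(int) looks, more or less, like this:

if ((n & -n) == n)  // n is a power of 2.
    return (int) ((n * (long) next(31)) >> 31);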

Multiplying by 8 (the value of n in this example) is the same as shifting left 3 places. So the net effect of this procedure is to shift the 31 bits 28 places to the right. In each of the 4 examples above, this leaves the bit pattern 101 (or 5 in decimal). In the example code 1310 different seeds resulted in this particular 3-bit sequence. The others resulted in the sequence 110 (or 6 in decimal).

If we didn’t discard the RNGs after just one value we would see the sequences diverge. While the first four RNG sequences all start with 5, the second values of each are 6, 0, 2 and 4 respectively. The small differences in the initial seeds start to have an influence.

The Singleton RNG

As this problem illustrates, having multiple RNGs in a single application can be problematic. If the seeds are not completely independent of each other the outputs will not be independent. In this case the seed of one RNG is derived from the seed of the previous RNG (by incrementing it). Everything is much simpler if we only have a single random number generator. Fortunately, java.util.Random, SecureRandom and all of the Uncommons Maths RNGs are thread-safe. A single instance can be accessed from multiple threads without causing problems, so there usually is no need to have multiple RNGs even in complex applications.

There are a few ways that you can make sure that you only use a single RNG. You could use the singleton pattern to restrict the number of instances to one, but that approach is frowned upon these days. Dependency injection is the more politically correct alternative: just pass your single RNG instance as an argument to the constructor of each object that needs to use random numbers. A third option is the functional style – passing the RNG as an argument to each method that needs it.
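
A dependency-injected consumer of random numbers can be as simple as the hypothetical class below (the class is invented for illustration; the point is just that it receives its RNG rather than creating one):

public class DiceRoller
{
    private final Random rng;
 
    // The single, shared RNG is supplied by whoever constructs this object.
    public DiceRoller(Random rng)
    {
        this.rng = rng;
    }
 
    public int roll()
    {
        return rng.nextInt(6) + 1;
    }
}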

Repeatability in Multi-Threaded Applications

Repeatability is a useful feature of deterministic pseudo-RNGs. If you use the same seed for two RNG instances they will produce the same output sequence. This is helpful for debugging applications because it means you can run your random code multiple times and have it behave identically each time.

It seems that the motivation for multiple RNGs in the original StackOverflow question was to maintain this repeatability in the presence of multiple threads (though that is not immediately obvious from the example). A single RNG won’t be able to guarantee repeatability in this scenario without a little bit of extra work. The reason for this is the uncertainty introduced by the vagaries of the scheduler. You can guarantee the repeatability of the RNG output but you can’t guarantee that individual values will always be picked up by the same threads.

There are a couple of solutions to this problem that don’t involve using one RNG per thread. They both involve decoupling the generation of the random numbers from their consumption. The simplest approach, if you know exactly how many numbers each thread needs, is to pre-generate all of the required random numbers and to pass them to the individual threads in arrays or lists. The numbers are generated in a predictable order but the order in which they are consumed is unspecified (and unimportant because we have partitioned them).
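
A sketch of that pre-generation approach, assuming we know in advance that each of threadCount workers needs exactly count values (both names are placeholders for this example):

// Generate every value up front, in a repeatable order determined by the seed,
// then hand each worker its own slice so that scheduling cannot affect the results.
Random rng = new Random(seed);
int[][] numbersForThread = new int[threadCount][count];
for (int t = 0; t < threadCount; t++)
{
    for (int i = 0; i < count; i++)
    {
        numbersForThread[t][i] = rng.nextInt();
    }
}
// Worker t is then given the array numbersForThread[t] to consume at its own pace.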

A variation on this approach, for when you don’t know how many numbers are required or when it is impractical to pre-generate all of the values and hold them all at once, is to feed the numbers to the threads using separate queues. The RNG can fill the queues in a round-robin fashion while the threads are consuming them at their own pace.

Lessons Learned