Genetic Algorithm Image Approximation Using EvoJ

In this article I will show how a genetic algorithm can be applied to approximate images with polygons. As in my previous articles, I will use my own EvoJ framework for this purpose, which I have already written about here and here.

Choosing a solution description method

Following the traditional way of solving problems using EvoJ, first we will choose how we describe the solutions for our problem. A straightforward solution is to describe the approximation as a simple list of polygons:

public interface PolyImage {

    List<Polygon> getCells();
}

This approach is viable: sooner or later we would evolve a satisfactory picture. Most likely later, because this representation reduces the efficiency of crossing over good solutions. Here is why.

For simplicity, imagine the following picture.

Suppose the gene pool contains two good solutions, each of which differs from the ideal one by a single polygon.

Note one detail: in both cases polygon number 0 matches one of the image polygons exactly, while the other polygon lies somewhere off to the side. It would be nice to cross these two solutions so that the well-placed polygons from both end up together, but this is impossible: EvoJ maps all variables onto a byte array and performs crossover at the byte level, without changing their order.

The crossover process is explained in the following figure.

Thus, two elements with the same index cannot both appear in the resulting solution.
The result of the crossover will most likely look like this:
This greatly reduces the efficiency of crossover, significantly slowing down evolution.

You can work around the problem by seeding the original population "correctly": distributing the polygons evenly across the image area, so that a polygon's position in the list is consistent with its geometric location in the picture. In that case the mutation must also be limited, so that polygons cannot "creep" too far away.
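Such even seeding can be sketched as follows; this is an illustrative helper, not part of EvoJ, and the class and method names are my own:

```java
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

/** Illustrative seeding: one initial polygon center per grid cell, so that a
 *  polygon's position in the list matches its geometric position in the image. */
public class SeedCenters {
    public static List<Point> seed(int width, int height, int rows, int cols) {
        List<Point> centers = new ArrayList<>();
        int cellW = width / cols;
        int cellH = height / rows;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // center of cell (r, c); a polygon seeded here should stay nearby
                centers.add(new Point(c * cellW + cellW / 2, r * cellH + cellH / 2));
            }
        }
        return centers;
    }
}
```

Walking the grid row by row guarantees that neighboring list indices correspond to geometrically neighboring cells.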

A more advanced approach is to divide the image into cells from the start, with each cell holding its own list of polygons. As a Java interface this is described as follows:

public interface PolyImage {

    Colour getBkColor();

    List<List<List<Polygon>>> getCells();
}

The outer list defines the rows of cells, the next level of nesting the columns, and finally the innermost list contains the polygons belonging to the corresponding cell. This approach automatically provides an approximate correspondence between a polygon's position in the byte array and its geometric location in the picture.

We describe each polygon as:

public interface Polygon {

    List<Point> getPoints();

    Colour getColor();

    int getOrder();
}

Here the order property determines the global rendering order of the polygon. The polygon color is described as:

public interface Colour {

    @Range(min = "0", max = "1", strict = "true")
    float getRed();

    @Range(min = "0", max = "1", strict = "true")
    float getGreen();

    @Range(min = "0", max = "1", strict = "true")
    float getBlue();

    void setRed(float red);

    void setGreen(float green);

    void setBlue(float blue);
}

Besides the three color components an alpha channel could also be added; however, we will draw all polygons at 50% opacity, which reduces the number of neutral mutations. The purpose of the annotations here is, I hope, obvious; if not, take a look at my previous articles, where this is covered.
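For instance, turning such a Colour into an AWT color drawn at 50% opacity might look like this (toAwtColor is an illustrative helper, not part of EvoJ):

```java
import java.awt.Color;

public class ColorUtil {
    /** Illustrative conversion of the evolved color components (floats in 0..1)
     *  into an AWT Color with a fixed 50% alpha. */
    public static Color toAwtColor(float red, float green, float blue) {
        // the fixed alpha keeps every mutation of red/green/blue visible,
        // avoiding "dead" genes that an evolvable alpha of 0 would create
        return new Color(red, green, blue, 0.5f);
    }
}
```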

Polygon points will be described as:

public interface Point {

    int getX();

    int getY();
}

Here the X and Y coordinates are taken relative to the center of the cell to which the polygon belongs. Note that annotations restricting the permissible values of variables are present only in the color description. This is because the permissible coordinate values depend on the image size and the cell configuration, while the permissible values of the color components are fixed in advance. Moreover, no annotation can be attached to the two inner lists of the PolyImage interface at all.
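Mapping such cell-relative coordinates back to absolute pixel coordinates is straightforward; the sketch below is an illustrative helper (the real drawing code in the sources does the equivalent):

```java
import java.awt.Point;

public class CellGeometry {
    /** Illustrative mapping of a point stored relative to the center of
     *  cell (row, col) back to absolute pixel coordinates. */
    public static Point toAbsolute(int relX, int relY,
                                   int row, int col,
                                   int cellWidth, int cellHeight) {
        // center of the owning cell in absolute pixels
        int centerX = col * cellWidth + cellWidth / 2;
        int centerY = row * cellHeight + cellHeight / 2;
        return new Point(centerX + relX, centerY + relY);
    }
}
```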

To set all the necessary parameters, we will use a mechanism called Detached Annotations.

So, let's move on to programming. The sources used in this example can be found here. Much of the code repeats what was written in my previous articles, and a lot of it is not related to the genetic algorithm as such. Therefore I will focus on the key points; the remaining parts are commented in the code.

Configuring EvoJ with Detached Annotations

        final HashMap<String, String> context = new HashMap<String, String>();

        context.put("cells@Length", ROWS.toString());
        context.put("cells.item@Length", COLUMNS.toString());
        context.put("cells.item.item@Length", POLYGONS_PER_CELL.toString());
        context.put("cells.item.item.item.points@Length", "6");
        context.put("cells.item.item.item.points.item.x@StrictRange", "true");
        context.put("cells.item.item.item.points.item.x@Min", String.valueOf(-widthRadius));
        context.put("cells.item.item.item.points.item.x@Max", String.valueOf(widthRadius));
        context.put("cells.item.item.item.points.item.x@MutationRange", "20");
        context.put("cells.item.item.item.points.item.y@StrictRange", "true");
        context.put("cells.item.item.item.points.item.y@Min", String.valueOf(-heightRadius));
        context.put("cells.item.item.item.points.item.y@Max", String.valueOf(heightRadius));
        context.put("cells.item.item.item.points.item.y@MutationRange", "20");
        context.put("cells.item.item.item.order@Min", "0");
        context.put("cells.item.item.item.order@Max", "1000");

Here we create a context: a Map similar to a properties file, where the key is the path to a property of the object, in a notation similar to that of BeanUtils, except that list elements are denoted by the keyword item.
Thus, the name cells refers to the cells property of our root interface PolyImage. The path cells.item refers to the elements of that list, which are themselves lists described by cells.item.item, and so on.
After the property name, the @ sign introduces the name of the detached annotation, which matches the name of the corresponding regular annotation. Specifying the length of each list is mandatory, so that EvoJ knows how much space to reserve in the byte array.
The entry cells.item.item.item.points@Length sets the number of points in a polygon (trace the path: the cells property, three nested lists, then the points property of an element of the innermost list). The boundaries of the point coordinates, their mutation radius, and the limits of the order property are set in the same way.
Next, we use the filled-in context when creating the gene pool.

Creating the Fitness Function

As the fitness function we take the sum of squared deviations of the evolved image from the original, with a minus sign, since the smallest sum corresponds to the best solution. The fitness function is implemented in the class ImageRating; let us dwell on its key points.

As in the examples from previous articles, the class inherits from the helper class AbstractSimpleRating :

public class ImageRating extends AbstractSimpleRating<PolyImage> 

and implements the abstract doCalcRating method as follows:

    protected Comparable doCalcRating(PolyImage solution) {
        BufferedImage tmp = drawImage(solution);
        return compareImages(originalImage, tmp);
    }
Here the function drawImage simply draws all the polygons according to their cell, order, color, and so on. We will not dwell on it in detail: it contains nothing related to the genetic algorithm.
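Still, for completeness, the drawing might be sketched roughly as follows. This is a simplified stand-in, not the article's actual drawImage: the Poly class below is illustrative and holds absolute coordinates, while the real code works with the nested EvoJ interfaces shown above:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PolyRenderer {

    /** Illustrative stand-in for an evolved polygon: absolute vertex
     *  coordinates, a fill color (already at 50% alpha) and a global order. */
    public static class Poly {
        final int[] xs;
        final int[] ys;
        final Color color;
        final int order;

        public Poly(int[] xs, int[] ys, Color color, int order) {
            this.xs = xs;
            this.ys = ys;
            this.color = color;
            this.order = order;
        }
    }

    /** Fills the background color, then draws the polygons sorted by order. */
    public static BufferedImage draw(int width, int height,
                                     Color background, List<Poly> polys) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        try {
            g.setColor(background);
            g.fillRect(0, 0, width, height);
            List<Poly> sorted = new ArrayList<>(polys);
            // the evolved 'order' property controls the global painting order
            sorted.sort(Comparator.comparingInt(p -> p.order));
            for (Poly p : sorted) {
                g.setColor(p.color);
                g.fillPolygon(p.xs, p.ys, p.xs.length);
            }
        } finally {
            g.dispose();
        }
        return img;
    }
}
```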

The function compareImages performs a pixel-by-pixel comparison of the images. Let's take a closer look:

    private Comparable compareImages(BufferedImage originalImage, BufferedImage tmp) {
        long result = 0;
        int[] originalPixels = originalImage.getRaster().getPixels(0, 0, originalImage.getWidth(), originalImage.getHeight(), (int[]) null);
        int[] tmpPixels = tmp.getRaster().getPixels(0, 0, tmp.getWidth(), tmp.getHeight(), (int[]) null);

        for (int i = 0; i < originalPixels.length; i++) {
            long d = originalPixels[i] - tmpPixels[i];
            result -= d * d;
        }
        return result;
    }

As you can see, the function is quite trivial: it takes the rasters of the original and the freshly drawn image and compares them element by element, accumulating the squared differences with a minus sign.


To reach a good result faster, the mutation parameters must be chosen well:
  • the probability that a solution undergoes mutation at all
  • the probability that each variable within the solution mutates
  • the mutation radius: the maximum distance a point can shift in a single mutation
  • the solution lifetime (not strictly a mutation parameter, but it also affects the rate of convergence)

Choosing a strategy is not trivial. On the one hand, strong mutation lets you quickly explore the solution space and find the neighborhood of the global extremum; on the other hand, it prevents solutions from settling into that extremum: they keep overshooting it. A long solution lifetime insures against degradation of the fitness value, but it slows overall progress and encourages sliding into a local extremum instead of the global one. It is also necessary to decide what should mutate with higher probability: the points of the polygons or their colors.
Moreover, after a while the chosen strategy exhausts itself: the solutions begin to oscillate around the extremum they have found instead of converging into it. To cope with this, the strategy must change over time. The general idea is that the level of mutation decreases while the lifetime of successful solutions grows. The mutation strategies used in the example were chosen empirically.
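The idea of a decaying mutation schedule can be sketched like this; the class and field names are illustrative and merely mirror the parameters listed above, they are not EvoJ's own strategy classes:

```java
/** Illustrative mutation schedule: each successive stage mutates less
 *  aggressively and lets good solutions live longer. */
public class MutationStage {
    public final double solutionMutationProbability; // chance a solution mutates at all
    public final double variableMutationProbability; // chance each variable mutates
    public final int mutationRadius;                 // max shift of a point, in pixels
    public final int solutionLifetime;               // iterations a good solution is kept

    public MutationStage(double solutionMutationProbability,
                         double variableMutationProbability,
                         int mutationRadius,
                         int solutionLifetime) {
        this.solutionMutationProbability = solutionMutationProbability;
        this.variableMutationProbability = variableMutationProbability;
        this.mutationRadius = mutationRadius;
        this.solutionLifetime = solutionLifetime;
    }

    /** The next, "calmer" stage: halved mutation, doubled lifetime. */
    public MutationStage next() {
        return new MutationStage(solutionMutationProbability / 2,
                                 variableMutationProbability / 2,
                                 Math.max(1, mutationRadius / 2),
                                 solutionLifetime * 2);
    }
}
```

The halving/doubling factors here are arbitrary; as the article says, the actual stages in the example were tuned empirically.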

Given all of the above, the main program loop is organized as follows:
  1. perform 10 iterations of the genetic algorithm
  2. take fitness function value from the best solution
  3. determine if it's time to change strategy
  4. change strategy if necessary
  5. save image to temporary file
  6. update UI

Having launched the program, in about 15 minutes (depending on the power of your machine) you will find that the evolved image already vaguely resembles the original, and after an hour or two you will get a result close to the one shown at the beginning of the article.

Further improvement of the algorithm

Progress can be significantly accelerated by applying the following optimizations:
  • perform the approximation in the Lab color space instead of RGB: in Lab the Euclidean distance between points approximately matches the color difference perceived by the eye. This will not increase the number of iterations per second, but it will steer evolution in the right direction.
  • create an importance mask for image areas: a black-and-white image highlighting the regions of interest, used later when computing the fitness function. This lets evolution concentrate on what really matters.
  • use VolatileImage for drawing. On my machine, drawing on a VolatileImage is 10 times faster than drawing on a BufferedImage. True, the result then has to be converted back to a BufferedImage to read the pixel colors, which costs a significant amount of performance, but the net result is still 3 times faster than drawing the polygons directly on a BufferedImage.
  • choose the optimal mutation parameters for each stage. This is not an easy task, but here too a genetic algorithm can help. I ran experiments where the mutation parameters themselves were the variables and the fitness function was the average rate of error reduction per 100 iterations. The results are encouraging, but significant computing power is required. Perhaps in one of the following articles I will look at this problem more closely.
  • run several independent gene pools and let evolution proceed in each of them independently, occasionally crossing individuals from different pools. This is known as the island model of genetic algorithms: evolution proceeds on islands isolated from each other, and crosses between individuals from different islands are rare.
  • seed the image with polygons gradually: first place one or two polygons in each cell and let them "creep" into place; then add another one or two polygons where the image deviates most from the original and let only the newly added polygons evolve; repeat until the per-cell polygon limit is reached, and only then start evolving the entire image as described above. This approach produces the most accurate approximations.
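For the first optimization, the standard sRGB to CIE Lab conversion (D65 white point) can be written as follows. This is textbook colorimetry rather than EvoJ code, and the class name is my own:

```java
public class LabConverter {
    /** Converts 8-bit sRGB components to CIE L*a*b* (D65 reference white). */
    public static double[] rgbToLab(int r, int g, int b) {
        // 1. sRGB -> linear RGB (undo gamma)
        double rl = toLinear(r / 255.0);
        double gl = toLinear(g / 255.0);
        double bl = toLinear(b / 255.0);
        // 2. linear RGB -> XYZ (standard sRGB matrix, D65)
        double x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl;
        double y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl;
        double z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl;
        // 3. XYZ -> Lab, normalizing by the D65 white point
        double fx = f(x / 0.95047);
        double fy = f(y / 1.0);
        double fz = f(z / 1.08883);
        return new double[]{116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)};
    }

    private static double toLinear(double c) {
        return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    private static double f(double t) {
        return t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16.0 / 116.0;
    }
}
```

With this in place, compareImages would accumulate squared differences of L, a, b components instead of raw RGB values.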


So, we have looked at an example of using EvoJ to solve the image approximation problem. It is worth noting that approximating images with a genetic algorithm is of academic or aesthetic rather than practical interest, since a similar result can be achieved with specialized image vectorization algorithms.

Next time I’ll talk about using a genetic algorithm to generate design ideas.

UPD1: Since the link to the sources given in the article text is easy to miss, I am duplicating it here.
UPD2: By popular demand, here are the comparison images from my first article:
[Side-by-side images: source image (left) vs. matched image (right)]