The goal of this laboratory is to evolve a neural controller for a simple differential wheel drive cart robot (similar to the e-puck) that has to navigate in an environment as fast as possible while avoiding obstacles (for example, walls). Evolution is performed in the RoboGen Online platform, and tests of the best controller found by evolution can be done either in simulation or on a real, 3D printed version of the robot.
By the end of this laboratory you should have learned something about:
- How to design fitness functions.
- How the parameters of the genetic algorithm (GA) influence evolution.
- How to test whether the solution found through evolution generalizes.
- Why evolution is so cool (look out for strategies you did not think of, or ways that evolution seems to “cheat” your fitness function).
- Optionally: How to load your controller onto the Arduino board of a real RoboGen robot.
- Optionally: Why transferring to reality may be difficult.
To get started, visit http://robogen.org/app
At first you will see some robots. Clicking on these will run a demo simulation. Try these out first to familiarize yourself with the visualizer. Be sure to close the visualization tab when you are done to free up resources.
Then, to evolve, click on the “Advanced” tab and then click “Start an evolution”.
Tip: if the software is having some problem, try refreshing the page.
Warning: all data is being saved to a virtual filesystem within your web browser. If you want to save anything for later use, be sure to download it to your home directory!
Neural Controller. The robot is controlled by a neural network which transforms the sensory IR inputs received from the sensors into motor commands for the left and right wheels of the robot. Inputs are scaled to fit the range [0,1] and the neuronal transfer function is chosen to be the logistic (sigmoid) function.
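To make this concrete, here is an illustrative sketch (not RoboGen's actual source) of how one output neuron of such a controller maps scaled IR readings to a motor command. Because the inputs lie in [0,1] and the logistic function squashes any weighted sum into (0,1), each motor command is automatically bounded:

```javascript
// Logistic (sigmoid) transfer function: maps any real number into (0, 1).
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// One output neuron: weighted sum of the scaled sensor inputs plus a bias,
// squashed by the sigmoid. The weights and bias are the evolved parameters
// (names and values here are hypothetical).
function motorOutput(inputs, weights, bias) {
  let sum = bias;
  for (let i = 0; i < inputs.length; i++) {
    sum += weights[i] * inputs[i];
  }
  return sigmoid(sum);
}
```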
Genetic Algorithm. A genetic algorithm is used to evolve the synaptic weights of the described neural controller. The synaptic weights, which represent the genes of the individuals, are coded using floating point values. When put together, the genes form a genome.
A population of individuals is evolved, using tournament parent selection, one-point crossover, weight mutation and either “mu+lambda” or “mu,lambda” replacement. The genomes of the first generation are initialized randomly in the range [−3,3]. Each individual is evaluated based on the fitness function defined in the examples/obstacleAvoidance.js (more on this below).
In the case of “plus” replacement, the mu parents and lambda children are pooled and ranked according to their measured fitness values, and the top mu individuals are copied to the new population (elitism). This ensures that good solutions are not lost because of mutation or crossover. In the case of “comma” replacement, by contrast, the mu parents are discarded, the lambda children are ranked, and the top mu of these are copied to the new population (which requires lambda >= mu). This replacement strategy favors exploration.
After this step, a new set of lambda children is generated from the mu survivors, with parents chosen by tournament competition. With a certain probability, two parents are chosen (each by its own tournament) and one-point crossover is applied to the pair; otherwise a single parent is chosen. In either case, each gene of the new offspring (whether a clone or created by recombination) is mutated with a certain probability; a mutated gene has a number drawn from a zero-mean Gaussian distribution added to it.
All of these parameters are configurable: the choice of “plus” or “comma” replacement, the values of mu and lambda, the tournament size, the crossover probability, the mutation probability, and the mutation standard deviation.
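The generation loop described above can be sketched as follows. This is an illustrative reimplementation, not RoboGen's actual code; all function names, parameter names, and the config object are assumptions made for the sketch:

```javascript
// Pick the fittest of `size` randomly sampled individuals.
function tournament(pop, fitness, size) {
  let best = pop[Math.floor(Math.random() * pop.length)];
  for (let i = 1; i < size; i++) {
    const c = pop[Math.floor(Math.random() * pop.length)];
    if (fitness(c) > fitness(best)) best = c;
  }
  return best;
}

// One-point crossover: head of parent a, tail of parent b.
function onePointCrossover(a, b) {
  const cut = 1 + Math.floor(Math.random() * (a.length - 1));
  return a.slice(0, cut).concat(b.slice(cut));
}

// Box-Muller draw from a zero-mean Gaussian with std dev sigma.
function gaussian(sigma) {
  const u = 1 - Math.random(), v = Math.random();
  return sigma * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Each gene is perturbed with probability pMutate.
function mutate(genome, pMutate, sigma) {
  return genome.map(w => (Math.random() < pMutate ? w + gaussian(sigma) : w));
}

// Produce the next population of mu survivors from mu parents.
function nextGeneration(parents, fitness, cfg) {
  const children = [];
  for (let i = 0; i < cfg.lambda; i++) {
    let child;
    if (Math.random() < cfg.pCrossover) {
      child = onePointCrossover(
        tournament(parents, fitness, cfg.tournamentSize),
        tournament(parents, fitness, cfg.tournamentSize));
    } else {
      child = tournament(parents, fitness, cfg.tournamentSize).slice();
    }
    children.push(mutate(child, cfg.pMutate, cfg.sigma));
  }
  // "plus": rank parents and children together; "comma": children only.
  const pool = cfg.replacement === 'plus' ? parents.concat(children) : children;
  return pool.slice().sort((a, b) => fitness(b) - fitness(a)).slice(0, cfg.mu);
}
```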
Fitness Function (aka Scenario) Definition
Exercise 1 – Fitness functions
Here you will design and implement a fitness function that rewards robots for navigating an arena as fast as possible without touching any walls. To do so, you will modify the code in examples/obstacleAvoidance.js.
Explanation: after each simulation step, information about the robot’s behavior is collected in afterSimulationStep; at the end of each simulation, the fitness for that simulation is computed in getFitness. The value returned is the minimum fitness across all simulations (so, for example, if a robot is evaluated in multiple environments, it is only as good as its worst case).
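The following sketch shows the kind of logic you might write. The real RoboGen scenario API differs (consult examples/obstacleAvoidance.js for the actual callbacks and robot accessors); here the bookkeeping is shown with plain objects so the idea stays clear: accumulate the distance travelled each step, and forfeit the score on any wall contact:

```javascript
// Hypothetical fitness-function sketch: reward distance covered,
// give zero fitness if the robot ever touches an obstacle.
function makeScenario() {
  return {
    distance: 0,      // total path length this simulation
    hitWall: false,   // did the robot ever touch a wall?
    prev: null,       // previous position {x, y}

    // Called once per simulation step with the robot's current
    // position and whether it is touching an obstacle.
    afterSimulationStep(pos, touching) {
      if (this.prev !== null) {
        this.distance += Math.hypot(pos.x - this.prev.x, pos.y - this.prev.y);
      }
      this.prev = pos;
      if (touching) this.hitWall = true;
    },

    // Called once at the end of the simulation: fast navigation
    // is rewarded, any collision forfeits the score.
    getFitness() {
      return this.hitWall ? 0 : this.distance;
    }
  };
}
```

A smoother alternative to the all-or-nothing collision penalty is to subtract a cost per contact step, which gives evolution a gradient to work with.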
Before trying to evolve with a given fitness function, make sure it works by simply running the simulator. If the reported value makes sense, you are ready to move on to evolution.
After the evolution is finished you can download the BestAvgStd.txt file from the results directory and run plot_results.py to create a fitness graph (requires having Python with Pylab on your computer; we hope to integrate this into the web application soon).
- Did you get the desired behavior with your fitness function?
- What strategies did you observe during the evolution and why were they good/bad in terms of fitness?
- What is the most implicit fitness function you can find?
Exercise 2 – Genetic Algorithm parameters
Re-run the above experiment, each time changing a single one of these parameters in examples/evolConfObstacleAvoidance.txt :
mu, lambda, replacement, tournamentSize, pBrainMutate, brainSigma, pBrainCrossover
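For example, a run testing a larger population with “comma” replacement might change only the corresponding lines of the config file. The values below are purely illustrative; keep the file's existing syntax and leave its other entries untouched:

```text
mu=25
lambda=50
replacement=comma
tournamentSize=2
pBrainMutate=0.1
brainSigma=0.2
pBrainCrossover=0.5
```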
- For each experiment, download and have a look at the fitness graph. Do you see a difference with respect to your initial parameters?
Exercise 3 – Generalization
We will now test the best individual obtained through evolution. To do so, go to the Advanced tab and modify examples/confObstacleAvoidance.txt
You can increase the time your robot is being tested by changing the simulationTime parameter. Do not change the other parameters. Save this file and then choose “Start a Simulation”. Change the robot description file to your best individual, e.g. myExperiment/GenerationBest-50.json, and run.
You can also change the starting position of the robot by modifying examples/startPos.txt and add obstacles by modifying examples/simpleArena.txt
For details on the contents of these files, see Simulator settings
- If you increase the simulation time, does the robot continue to perform well?
- When you move your robot to a different start position, does it still work?
- When you add obstacles in the environment, does your controller still work?
- If your controller didn’t generalize to these tests, what could you do to fix the problem?
Exercise 4 (optional) – Transfer to the real robot
When you have found a good controller that you wish to test on a real robot, click the “Generate log files” option in the new simulation window and give it a directory to write to. Then download the NeuralNetwork.h file from this directory and send it to one of the assistants. (If you are doing these exercises outside of a class, please refer to Building Your Robot).
- Do you notice any difference between the robot behavior in simulation and reality? Explain why.
Next Exercise: Body Plans (Morphologies)