Under Construction. I’ve moved the front page from the DXNN Research Group here, and we will update this page as we continue building and refining our organization’s new website.


Q-Learning, A*, and deep learning (SDAE, along with numerous variations of denoising- and autoencoder-based stacks, with backprop included) have been added to DXNN as modules that can be used as preprocessors in Sensors, as postprocessors in Actuators, or as neurodes. I have been rather lazy about updating this site and pushing the code to GitHub; updates will become more regular by the end of the month, as I move the website to a different platform.
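DXNN itself is written in Erlang, but the core tabular Q-learning update that such a module performs can be sketched in a few lines of Python (the function name and dictionary layout here are purely illustrative, not DXNN’s actual API):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    # Value of the best action available from the next state (0 if unseen).
    best_next = max(Q[next_state].values()) if Q.get(next_state) else 0.0
    # Ensure the (state, action) entry exists before updating it.
    Q.setdefault(state, {}).setdefault(action, 0.0)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q
```

For example, starting from an empty table, a single step with reward 1.0 moves the estimate for that state–action pair from 0 to 0.1 with the default learning rate.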

DXNN2 now supports neural micro-circuits. The system has been further tuned: the standard double-pole balancing benchmark is now solved in under 1,000 evaluations, and XOR-AND-XOR in under 10,000 evaluations.

DXNN1 & DXNN2 have been switched to a multi-objective, augmented hall-of-fame selection approach, which has significantly improved the performance of the system. The standard double-pole balancing benchmark, on which a multitude of neuroevolutionary systems are tested and which can thus be used for comparison with other systems, is now solvable in under 1,500 evaluations (compared to the previous state of the art of 3,000+ evaluations). The basic deceptive problem XOR-AND-XOR is solvable in under 25k evaluations on average, compared to the 500k evaluations required by systems such as NEAT when using novelty search.
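The hall-of-fame half of that selection scheme comes down to keeping an archive of the best individuals ever seen and letting them compete with the current generation for parenthood. A toy single-objective Python sketch (illustrative only; the real selection is multi-objective and implemented in Erlang):

```python
def update_hall_of_fame(hof, generation, capacity=10):
    # Merge the archive with the new generation and keep only the
    # `capacity` fittest individuals; parents are later drawn from
    # the current population augmented with this archive.
    merged = hof + generation
    merged.sort(key=lambda ind: ind["fitness"], reverse=True)
    return merged[:capacity]
```

Because champions from earlier generations stay in the pool, a lucky-but-mediocre generation cannot drag the whole population backwards.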

Neural micro-circuit capabilities have been added to the DXNN1 system; a DXNN2 version will be added soon. This allows each node to be not a single neuron but a feed-forward neural circuit. It increases the capabilities of each node, shifts the ratio of computation to message passing in Erlang’s favor, and generalizes what each node can represent.
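The idea can be illustrated in Python even though the nodes themselves are Erlang processes: instead of evaluating one weighted sum and activation per message, a node evaluates a small feed-forward circuit (layer layout and tanh activation here are assumptions for the sketch):

```python
import math

def neurode(weights, bias, inputs):
    # A single neurode: weighted sum of inputs plus bias, through tanh.
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

def micro_circuit(layers, inputs):
    # A node as a feed-forward circuit: each layer is a list of
    # (weights, bias) pairs, and each layer feeds the next one.
    signal = inputs
    for layer in layers:
        signal = [neurode(w, b, signal) for w, b in layer]
    return signal
```

One incoming message now triggers several neurode evaluations inside the node, which is exactly the computation-per-message shift the post describes.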

DXNN V2 license changed to Apache 2.0.

A new multi-objective selection algorithm has been added to DXNN, along with a new type of computational node (neural micro-circuits). New benchmarks have been added: epitope prediction, the XOR-AND-XOR deceptive problem, and deceptive target finding.

DXNN2 has been released on GitHub, along with a decoupled version of the 2D simulator Flatland:

Both DXNN and the second-generation DXNN are now available on GitHub. The cross-validation pipeline has been automated for the DXNN system, allowing the researcher to specify whether to just train, or to train, validate, and test the evolved agents. The benchmarker module has been updated: experimental data is now saved to the Mnesia table experiment, where it can be used and accessed at any time, with every experiment having its own id/name. Finally, my book, Handbook of Neuroevolution Through Erlang, can now be preordered on Amazon, or from:
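At its core, the train/validate/test option amounts to a deterministic three-way partition of the data set before evolution begins. A minimal Python sketch of such a split (the fractions and function name are illustrative, not DXNN’s actual interface):

```python
import random

def tvt_split(data, train_frac=0.7, val_frac=0.15, seed=0):
    # Shuffle a copy of the data once with a fixed seed, then cut it
    # into train / validation / test partitions.
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    t = int(n * train_frac)
    v = t + int(n * val_frac)
    return shuffled[:t], shuffled[t:v], shuffled[v:]
```

Evolution then runs against the training partition, the validation partition picks the champion, and the test partition gives the unbiased final score.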

DXNN2 has been released to members, and will be released on GitHub shortly. It is a different, more modular, cleaner architecture of the system, and the one developed in my upcoming book, Handbook of Neuroevolution Through Erlang, published by Springer (hardcover ISBN: 978-1-9614-4462-6). New modules are coming out so that the DXNN system can be used on bioinformatics-based problems (epitope prediction, gene marker prediction…). Finally, a Neural Network Research Repository will open within a month.

I will be present at this year’s Genetic and Evolutionary Computation Conference (GECCO), giving a summary of the use of DXNN in financial analysis and automated currency trading.
03.05.2012
The following are the slides from my recent presentation at the Erlang Factory: Unfortunately, even though a really great talk was prepared, the difficulty of connecting my Linux machine to the projector meant the talk started 15 minutes late, and there was not enough time to even finish it. I’ll now be carrying a secondary machine with OSX/Windows to conferences, as it seems that Linux is just not yet ready to interface with other standard systems.

I’m currently exploring extending the Kerl module, which interfaces Erlang with Player/Stage, to use NIFs. I will add the projects section at the end of this semester; it will list the projects, the people involved, progress, and completed phases.

1. Beginning integration of evolutionary strategies into DXNN.
2. Modifying the DXNN architecture from using the cortex element as a gatekeeper to using it as a synchronizer. This will make it easier to fully decouple the system.
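In the synchronizer role, the cortex no longer filters signals; it only paces the sense-think-act loop, waiting for all actuators to finish before triggering the sensors again. A toy, single-process Python sketch of that loop (the real cortex is a separate Erlang process coordinating via message passing):

```python
def cortex_sync(sensors, network, actuators, steps):
    # The cortex as a synchronizer: each cycle it triggers the sensors,
    # lets the network process the percepts, and only after every
    # actuator has acted does it begin the next cycle.
    for _ in range(steps):
        percepts = [sense() for sense in sensors]
        decisions = network(percepts)
        for act, decision in zip(actuators, decisions):
            act(decision)
```

Because the cortex only synchronizes rather than relays every signal, sensors, neurons, and actuators can exchange messages directly, which is what makes full decoupling feasible.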

Unfortunately, things are taking longer than planned: the PhD program and the writing of the book are consuming substantially more time than originally expected.

I plan on releasing the source code for DXNN by August, as soon as I finish documenting it. I will host it on GitHub.

07.02.2011 (G.S.)
A while back I began writing a textbook on the construction of Topology and Weight Evolving Artificial Neural Networks (TWEANNs) through Erlang. It is a how-to book, covering in detail all the steps from simulating a single neuron to developing a fully distributed TWEANN platform. The book will also present a number of application areas (time series analysis, robotics, ALife…), and will provide the source code for all the projects. The book is scheduled for completion by the end of this year.