There's an alife program called DarwinBots where small bots powered by mutating code compete against each other to survive and reproduce.
Given enough time, you'd expect them to develop clever behaviors, but instead they just fuzz-tested the sim and locked in on exploits of bugs or environment settings. They only got a bit more clever when connecting different sims running on different conditions.
Eyes already use different kinds and densities of sensors, optimized either for detail and color or for movement and edges. I wouldn't expect a single learning method, even after optimizing it to its limits, to outperform what two or more layers of different methods could do, especially when trying to avoid exploits like the tank story.
> Given enough time, you'd expect them to develop clever behaviors, but instead they just fuzz-tested the sim and locked in on exploits of bugs or environment settings.
Classic A-life! Also, not so different from the spirit of actual biology.
> They only got a bit more clever when connecting different sims running on different conditions.
Diversity is very important for evolution on many levels. What many don't realize (especially, I note, evolution deniers) is that the ecosystem as a whole provides a very complex and continually varying epiphenomenal fitness function to any given organism.
If you don't have a sufficiently complex genotype-phenotype mapping and the system is not evolvable (see Günter Wagner's work), then you shouldn't expect more complex phenotypes. Understanding genetic representation is going to be an important step toward open-ended evolutionary systems.
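A toy way to see why the mapping matters (a hypothetical encoding for illustration, not one of Wagner's actual models): with a redundant genotype-phenotype mapping, many point mutations are phenotypically neutral, so a population can drift along networks of equivalent genotypes instead of breaking on every change.

```python
import random

def phenotype(genotype):
    # Redundant mapping: only the sign pattern of the genes is
    # expressed, so changing a gene's magnitude is often neutral.
    return tuple(1 if g > 0 else -1 for g in genotype)

rng = random.Random(42)
g = [0.5, -1.2, 2.0, -0.1]
p = phenotype(g)

neutral = 0
for _ in range(1000):
    i = rng.randrange(len(g))        # pick one gene...
    mutant = list(g)
    mutant[i] += rng.gauss(0, 0.5)   # ...and perturb it
    if phenotype(mutant) == p:
        neutral += 1
print(f"{neutral / 1000:.0%} of point mutations were neutral")
```

Under a direct, brittle mapping (every gene value expressed literally), essentially every mutation would change the phenotype; here most don't, which is the property evolvability arguments care about.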
> They only got a bit more clever when connecting different sims running on different conditions.
That's part of the reason why a lot of these nets are trained with added noise, as well as drop-out (randomly disabling 50% of the hidden neurons at every training step).
The drop-out tactic in particular is effective at preventing "exploits" of the neural-net type, which otherwise appear in the form of large correlated weights (really big weights depending on other really big opposite weights to cancel out--it works, but it doesn't help learning).
Either way, adding noisy hurdles helps because exploits are usually edge cases, and noise makes them less dependable: the region of fitness space very close to an exploitable spot is usually not very high-ranking at all (which is why you don't want your classifiers ending up there).
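For concreteness, here's a minimal NumPy sketch of "inverted" drop-out as it's commonly implemented (an illustrative stand-alone function, not any specific framework's API): each training step, a random half of the activations are zeroed, and the survivors are rescaled so the expected activation stays the same at test time.

```python
import numpy as np

def dropout(activations, p_drop=0.5, rng=None):
    """Inverted drop-out: zero out a random fraction p_drop of units,
    scaling the survivors by 1/(1 - p_drop) so no rescaling is
    needed at test time."""
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= p_drop  # True = unit kept
    return activations * mask / (1.0 - p_drop)

h = np.ones((4, 8))          # a batch of hidden activations
h_dropped = dropout(h, 0.5)  # roughly half the units are zeroed
```

Because a unit can't rely on any particular other unit being present, pairs of huge mutually-cancelling weights stop being a viable strategy, which is exactly the "exploit" described above.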
Darwinbots uses actual computer code to control the robots, which makes it really hard for evolution to work with: most mutations just break the code, and very few create anything interesting. And the simulation is too slow to explore millions of different possibilities to make up for that difficulty. What makes it worse is that the bots are usually asexual.
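You can see the brittleness with a toy experiment (a hypothetical stand-in "genome" as a Python arithmetic expression; real DarwinBots uses its own stack-based DNA language): apply blind point mutations and count how many mutants even still run.

```python
import random

# Toy bot "genome": an arithmetic control expression over sensor x.
GENOME = "(x * 3 + 1) / (x + 2)"
CHARSET = "0123456789+-*/()x "

def mutate(genome, rng):
    """Flip one random character -- a blind point mutation."""
    i = rng.randrange(len(genome))
    return genome[:i] + rng.choice(CHARSET) + genome[i + 1:]

def survives(genome):
    """A mutant 'survives' only if it still evaluates without crashing."""
    try:
        eval(genome, {"__builtins__": {}}, {"x": 1.0})
        return True
    except Exception:
        return False

rng = random.Random(0)
alive = sum(survives(mutate(GENOME, rng)) for _ in range(1000))
print(f"{alive / 1000:.0%} of point mutants still run at all")
```

And "still runs" is a much lower bar than "does something interesting" -- nearly all of the survivors here are just digit-for-digit swaps that tweak a constant.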
However, I think that's OK. Most of the fun with Darwinbots is programming your own bots. There used to be (still are?) competitions where people wrote their own bots and had them compete under different conditions.