If we want a "brain" with the number of neurons and synapses of a human brain, one that takes forever to train, we already know how to do that: we just need a man, a woman, and nine months.
Geoff Hinton commented in a Reddit AMA (https://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama... ) that "The brain has about 10^14 synapses and we only live for about 10^9 seconds. So we have a lot more parameters than data. This motivates the idea that we must do a lot of unsupervised learning since the perceptual input (including proprioception) is the only place we can get 10^5 dimensions of constraint per second."
That sounds to me like humans don't take "forever to train", and they definitely don't learn from "big data" compared to the amount of data we feed into even a small machine neural network. Brains must already have a lot of shortcuts built-in.
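For what it's worth, the arithmetic in that quote is easy to check. A back-of-envelope in Python (the 10^14 / 10^9 / 10^5 figures are straight from the quote; the ~1 explicit label per second for "supervised" signal is my own guess, not Hinton's):

    # Hinton's numbers, taken straight from the quote above
    synapses = 1e14                # rough parameter count of a human brain
    lifetime_seconds = 1e9         # roughly 30 years
    perceptual_dims_per_sec = 1e5  # sensory input, incl. proprioception

    # My own (generous) guess at explicit supervised feedback: ~1/second
    supervised_per_sec = 1

    print(f"parameters:             {synapses:.0e}")                                   # 1e+14
    print(f"supervised constraints: {lifetime_seconds * supervised_per_sec:.0e}")      # 1e+09
    print(f"perceptual constraints: {lifetime_seconds * perceptual_dims_per_sec:.0e}") # 1e+14

Only the perceptual stream supplies constraints on the same order as the parameter count, which is exactly why the quote points at unsupervised learning.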
I was just being glib about that. "Forever" is hyperbole, but the ten-odd years it takes to go from birth to useful for most intellectual tasks is a pretty long time in relative terms.
Brains must already have a lot of shortcuts built-in.
Oh absolutely. My point is just that there's no reason for us not to pursue "shortcuts", as opposed to trying to build an ANN big enough to essentially replicate the actual mechanics of a real brain.
To extend this overall point though... it may be that as we learn newer/better algorithms and techniques, we'll find that you can actually make an ANN that, for example, learns to do logical reasoning, and does so without needing anywhere near the number of neurons and synapses that a real brain uses. But until it becomes apparent that this is likely, I think it's a good idea to keep researching "hybrid" systems that hard-wire in elements like various forms of symbolic/logical reasoning, and anything else we at least sorta/kinda understand.
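To make the "hybrid" idea concrete, here's a toy sketch in Python. It's entirely my own construction, not any published architecture: a stand-in "neural" module emits symbolic facts, and a hand-wired transitivity rule does the reasoning over them instead of asking the network to learn it.

    # Toy hybrid: a (mocked) neural module produces symbolic facts,
    # and a hard-wired logical rule layer does the reasoning.
    # All names and rules here are illustrative inventions.

    def neural_perception(image):
        """Stand-in for a trained ANN mapping raw input to predicates."""
        # A real system would run a classifier here; we fake its output.
        return {("is_a", "rex", "dog"), ("is_a", "dog", "mammal")}

    def symbolic_reasoner(facts):
        """Hard-wired rule: is_a(x, y) and is_a(y, z) imply is_a(x, z)."""
        derived = set(facts)
        changed = True
        while changed:                      # forward-chain to a fixpoint
            changed = False
            for (_, x, y) in list(derived):
                for (_, y2, z) in list(derived):
                    if y == y2 and ("is_a", x, z) not in derived:
                        derived.add(("is_a", x, z))
                        changed = True
        return derived

    print(symbolic_reasoner(neural_perception(image=None)))
    # includes ('is_a', 'rex', 'mammal'), which nothing had to learn

The point being that the transitivity rule is written in rather than learned, so the ANN only has to handle the perceptual part.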
We are often deceived by the fact that human infants are optimised for plasticity (I know this is arguable, but it's a reasonable theory) and for their brain to fit through a biped's birth canal (and subsequently grow). Look at lambs in contrast (I've been on a sheep farm in Scotland for a couple of weeks, so I've had the opportunity!). Lambs stand up about 3 to 10 minutes after birth (or there is a problem). They walk virtually immediately after that, they find the ewe's udder and take autonomous action to suckle within an hour (normally), and they follow their mothers across fields, streams, up hills and over bridges as soon as they can walk. Within a week they are building social relations with other sheep and lambs, and within three weeks they are charging round fields playing games that appear pretty complex in terms of different defined places to run up to and back and so on.
This kind of rapid cognitive development argues strongly (IMO) against the kind of experimental/experiential training that a tabula-rasa NN approach would imply.
Human plasticity and logical reasoning are the apex of other processes and approaches. I think we fixate on them because we have so much access to models of these processes (personally through introspection and socially via children), and because the results are so spectacular and intrinsically impressive.
I used to go to the SAB conferences in the '90s; they're still going, but somewhat diminished, I think. This was where the "Sussex School" of AI had its largest expression: Phil Husbands, Maggie Boden and John Maynard Smith all spoke about the bridges between animal cognition and self-organising systems. I am pretty sure that they were all barking up the wrong tree (he he he), but there was, and is, a lot of mileage in the approach.