Perfect information no longer holds when the AI is reading pixel colors off the screen and pressing keys on the keyboard. That is much closer to the real-world situation.
As for goals, they are indeed better defined in games. But we can create artificial goals attached to real life (a rough sketch in code follows the list):
- goal: GPS sensor reports specific position
- goal: the camera reports a recognised object in a specific position
- goal: a button was pressed (signal reaching the AI e.g. through the cloud)
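To make that concrete, here is a minimal sketch of what such goal predicates might look like when wired up as a sparse reward signal. Everything in it is hypothetical: the target coordinates, the "cup" label, the detection format, and the function names are stand-ins for whatever sensor stack a real deployment would use, not any actual API.

```python
import math

# Hypothetical sensor outputs: a GPS fix, an object-detection list, and a
# button signal (e.g. delivered through the cloud) stand in for real hardware.

TARGET_LAT, TARGET_LON = 52.5200, 13.4050   # example goal coordinates
TOLERANCE_M = 5.0                           # allowed position error, metres

def gps_goal(lat: float, lon: float) -> bool:
    """Goal 1: the GPS sensor reports a specific position, within tolerance."""
    # Equirectangular approximation; accurate enough over a few metres.
    dx = (lon - TARGET_LON) * 111_320 * math.cos(math.radians(TARGET_LAT))
    dy = (lat - TARGET_LAT) * 111_320
    return math.hypot(dx, dy) <= TOLERANCE_M

def camera_goal(detections: list[tuple[str, float, float]]) -> bool:
    """Goal 2: a recognised object sits inside a target region of the frame."""
    return any(label == "cup" and 0.4 <= x <= 0.6 and 0.4 <= y <= 0.6
               for label, x, y in detections)

def reward(lat: float, lon: float,
           detections: list[tuple[str, float, float]],
           button_pressed: bool) -> float:
    """Sparse reward: 1.0 the moment any goal predicate fires, else 0.0."""
    goals = (gps_goal(lat, lon), camera_goal(detections), button_pressed)
    return 1.0 if any(goals) else 0.0
```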
You'll find pretty quickly that the exact techniques that worked so well for learning to play Atari games very often fail spectacularly once you introduce goal-steering.
Reinforcement learning turns out to be fantastically clever at finding really stupid solutions if you give it the tiniest opening to do so. You put a camera in a room to provide feedback for an agent to learn to move an object closer to the camera, and it will happily learn to knock the camera over.
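To see how small that opening needs to be, consider a sketch of the proxy reward involved. This is a hypothetical implementation, not any published setup: it rewards the pixel area the object occupies in the camera frame, and nothing in that signal distinguishes "object moved closer" from "camera knocked over next to the object".

```python
import numpy as np

def proxy_reward(object_mask: np.ndarray) -> float:
    """Proxy for "object is closer": the fraction of the frame covered by
    the object's detection mask. This is exactly the kind of tiny opening
    RL exploits: tipping the camera so the object fills the view scores
    just as well as actually moving the object toward the camera."""
    return float(object_mask.mean())
```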
To be fair to our AI brethren, this is true of humans too. I watch people game metrics every single day, and many of them fail spectacularly when presented with real-world problems outside their experience.
The problem is that we generally rely on human intelligence to fill the gaps in a spec in ways that aren't stupid. An AI will exploit gaps that a human would insist aren't even there until shown evidence to the contrary.
So we can call that a bad spec if we like, but what answer does that lead to? I don't need an AI if I can just declare that all users must be really good programmers spending their days writing unambiguous specs. That wasn't really the goal of the AI, though.
Unfortunately, when gaming metrics has real-world consequences (and those consequences are of course the whole point of the metrics, assuming they aren't gamed), then things like 2008 happen, and tens of thousands of people lose their homes.
I think you are misusing the term "perfect information." Chess, in this case, is a real-world game where both players have perfect information. That is, neither player can hide where any of the pieces have moved.
So the complexities of how the AI is taught to interact ultimately don't matter. It may take a lot of effort to parse the visuals of the board to recover that perfect information, but the game is still defined as one of perfect information.
True. I think my point was more about games with non-trivial rules (or many degrees of freedom): for example, going from chess/go to turn-based video games like Civ, to StarCraft. Each step usually involves vastly more possible positions in time and space.
Even then, it still reduces to a brute-forceable game with finite, explorable states; 'just' with an extra layer of (granted, quite interesting and technically impressive) parsing. We don't know whether the same holds for reality.
And yes, we can do our best to turn real life into a game, but all such models leak pretty badly, and the leaks tend to cause much more fundamental instability.
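To put "finite, explorable states" in concrete terms, here is a toy sketch of exhaustive exploration. The `initial`, `successors`, and `is_win` hooks are hypothetical placeholders for whatever game is being modelled; real solvers prune and search far more cleverly, but the underlying structure is the same.

```python
from collections import deque

def explore(initial, successors, is_win):
    """Brute-force breadth-first enumeration of every reachable game state.
    Works for any finite state space with hashable states; an extra layer
    of visual parsing only changes how states are read, not this loop."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if is_win(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None  # exhausted the state space without finding a win
```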