Lately, whilst playing Zelda TotK (this works for BotW as well), I was thinking that a good test of whether you have AGI would be letting it solve all the shrines. They require real-world knowledge, sometimes rather "deep" logic, and the ability to use previously unseen capabilities. Of course the AGI should not get a million tries at it, RL style; just use the "shrine test" as a held-out test set. I believe that would make a pretty nice virtual proxy for a general intelligence test.
From the article, I find it strange that AGI often de facto implies "super intelligence". These should be two distinct concepts. I find that GPT-4 is close to a general intelligence, but far from a super intelligence. Succeeding at just general intelligence would be amazing, but I don't believe it means super intelligence is just a step away.
This also brings me to a point I don't see discussed a lot, which is simulation (NOT in the "we live in a simulation" sense). Let's say I have AGI, and it passes the above-mentioned shrine test, or any other accepted test. Now I'd like to tell it, for example, "find a way to travel faster than light". The AGI would first be limited by our current knowledge, but could potentially find a new way. In order to find a new way it would probably need to conduct experiments and adjust its knowledge based on them. If the AGI cannot run those experiments in a good enough simulation, then what it can discover will be rather limited, at least time-wise and most likely quality-wise. I think this falls back to Wolfram's computational irreducibility. Even if we managed to build a super general intelligence, it would be limited by the physics of the world we live in sooner rather than later.
The reason AGI is often equated with runaway intelligence is that once you get to a space where your computer can do what you do, it can improve itself instead of relying on you to do it. That improvement then becomes bounded by processing power and time, and is constantly accelerating.
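As a toy numeric sketch of that feedback loop (purely hypothetical numbers, not a claim about any real system): if each improvement cycle adds capability in proportion to the capability the system already has, growth accelerates until it hits whatever the hardware allows.

```python
# Toy model of recursive self-improvement bounded by a compute budget.
# All constants here are made up for illustration only.

def simulate(cycles=20, capability=1.0, compute_ceiling=1000.0):
    history = []
    for cycle in range(cycles):
        # Each cycle, the system improves itself in proportion to how
        # capable it already is (the "constantly accelerating" part)...
        gain = 0.5 * capability
        # ...but the result can never exceed what the hardware allows
        # (the "bounded by processing power and time" part).
        capability = min(capability + gain, compute_ceiling)
        history.append((cycle, capability))
    return history

for cycle, capability in simulate():
    print(f"cycle {cycle:2d}: capability {capability:8.2f}")
```

In this sketch capability grows geometrically for a while and then flatlines at the ceiling, which is the shape of the argument above: runaway until resources, not humans, become the limit.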
I find it amazing how GPT-4 is good at even abstract "reasoning" as long as you present the problem as a story. Some problems can't plausibly be presented as a story, of course, and there is no way to automatically convert an arbitrary problem into a story.
So you are doing all the work by "coding" in a weird language that actually takes more brain power than regular high-level (as in direct business logic, not C or Python) programming languages?
you "coded" in quotes, as in you had to think a lot about how to translate the problem to the machine. you are pretty much a compiler for ai or something.
getting a machine to do that sort of real-time spatial reasoning may well be harder than getting it to tell you the meaning of life or whatever. brains are inextricable from the evolution of directed locomotion. several species of sessile tunicates begin life as a motile larva that reabsorbs a significant portion of its cerebral ganglion once it settles down. BDNF is released in humans upon physical activity. the premotor cortex dwarfs wernicke's area. and no "AI" development that's been hyped in the past decade as intelligent could be usefully strapped to a boston dynamics dog.