Yeah, this is in 'flies like a plane, not like a bird' territory. But I think it's closer than you think.
The systems do learn and have improved rapidly over the last year. Humans have two learning modes - short-term in-context learning, and then longer-term learning that occurs with practice and across sleep cycles. In particular, humans tend to suck at new tasks until they've gotten in some practice and then slept on it (unless the new task is a minor deviation from a task they are already familiar with).
This is true for LLMs as well. They have some ability to adapt to the context of the current conversation, but no model weight updates happen during that adaptation. Weight updates happen over a longer period, as pre-training and fine-tuning data are updated. That longer-phase training is where we get the integration of new knowledge through repetition.
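To make the distinction concrete, here's a minimal sketch (pure Python, with a hypothetical one-weight toy model standing in for a real LLM): inference reads the weights but never writes them, while a training step actually rewrites them.

```python
class TinyModel:
    def __init__(self):
        self.w = 0.5  # one weight standing in for billions of parameters

    def generate(self, context):
        # "In-context learning": output depends on the prompt/context,
        # but the weight is only read, never written.
        return self.w * sum(context)

    def train_step(self, x, target, lr=0.1):
        # Longer-phase learning: a gradient step rewrites the weight.
        pred = self.w * x
        grad = 2 * (pred - target) * x  # d/dw of squared error
        self.w -= lr * grad

model = TinyModel()
w_before = model.w

model.generate([1.0, 2.0, 3.0])      # inference: weight unchanged
assert model.w == w_before

model.train_step(x=1.0, target=2.0)  # training: weight updated
assert model.w != w_before
```

Obviously a real model's in-context behavior is far richer than this, but the asymmetry is the same: the context shapes the output, while only the training loop touches the parameters.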
In terms of reasoning, what we've got now is somewhere between a small child and a math prodigy, apparently, depending on how much cash you're willing to burn on the results. But a small child is still a human.