A pattern has formed in recent years: every once in a while there is another major achievement in AI (especially by DeepMind or OpenAI), and a lot of AI skeptics go out of their way to explain how, despite this achievement, we are still very, very far from AGI. Now, as far as I know (and probably as far as anyone knows), we are indeed still far from AGI. However, I am getting the feeling that no matter how impressive the achievement, as long as it is still obviously not AGI, the same skeptics will express their skepticism more or less equally strongly. Which is to say, whatever they are doing, it is not a useful way of making intelligent guesses about future AI progress.
Maybe I am being uncharitable. Maybe there are some milestones that, despite being obviously insufficient for AGI in themselves, would make most people go "ah, now AGI might not be that far off". But at the moment, I'm honestly not sure what those milestones would be.