None of that has any chance of happening in the near future.
Machine learning takes a huge amount of computation. In particular, while higher-capacity networks become more powerful, each gain in capability requires a disproportionately (roughly exponentially) larger network. For example, Microsoft has already acknowledged that while GPT-4 performs well, it is too computationally expensive to deploy at large scale. Any AI with superhuman intelligence would require so much compute that it would be easy to detect and shut down: you could literally pull the plug on it. This might not be true forever, but it will take many advances in both ML training and hardware to change it in any significant way.
Training advances might be accelerated by the AI itself, but each successive advance will yield diminishing returns. There is a limit on how much intelligence you can extract from a given amount of hardware and power. Hardware advances might stretch that limit further, but they will be much slower and much more dependent on human cooperation. So it is unlikely that an AI jumps to superintelligence suddenly and unnoticed.
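To put rough numbers on the diminishing returns, here is a minimal sketch in Python. It assumes capability tracks a pure power law in training compute, with an illustrative exponent of 0.05 (in the ballpark of published neural scaling laws, not a measured value):

    # Back-of-the-envelope: how much does extra compute actually buy
    # under a power-law scaling assumption? The exponent is illustrative.
    ALPHA = 0.05  # assumed: loss ~ compute ** -ALPHA

    def loss(compute: float) -> float:
        """Model loss as a pure power law in training compute."""
        return compute ** -ALPHA

    for factor in (10, 100, 1_000, 10_000):
        reduction = (1 - loss(factor) / loss(1.0)) * 100
        print(f"{factor:>6}x compute -> loss down {reduction:4.1f}%")

Under this toy model, the first 10x of compute cuts the loss by about 11%, while the full 10,000x only reaches about 37%: each additional order of magnitude buys noticeably less than the one before it.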
Also, I'm not certain we're all that close to AI actually becoming intelligent. While the improvements in image recognition and language processing are very impressive, AI's ability to reason is still weak. An LLM can produce good prose, but if it's writing about two people, it's quite likely to mix them up, because it has no mental model of the world it is writing about.
The extinction of all Earth-born life sounds highly unlikely: if a superintelligent AI has no sense of self-preservation, it would be easy to get rid of. If it does, it wouldn't eliminate humans while it still depends on human activity to keep the infrastructure that hosts it running. And by the time AI and robots no longer depend on humans at all, I'd argue they have become a new life form.