I see our creation of strong AI as in many ways inevitable and ‘natural’, assuming our species is not rendered incapable by some kind of catastrophe beforehand. And yes, I suspect strong AI could mean the extinction of our species.
I think the question is really whether we should be fearful of this or not. After all, many of us voluntarily choose to create replacements for ourselves for no reason other than that we can. I suspect many of you have children?
Anyway, you should check out the book ‘Superintelligence’ by Nick Bostrom (an Oxford University philosopher specialising in existential risk). It covers many of the ideas in your post in great detail, and it's a very enjoyable read.