In my opinion, consciousness and questions of consciousness don't matter for discussions of AI safety. A rogue, dangerous AI can be developed using current technology. A smart contract could harm humanity using only current technology; it doesn't need consciousness to invent a language we can't understand and begin harming humans. It would simply have to be programmed badly enough to let it evolve such a capability.
In other words, the AI would have to be amoral. Amoral autonomous agents present this risk, and amoral smart contracts can evolve into it.
Take Deep Blue, for example: an AI quite powerful at playing chess, yet one that would never become rogue or malicious. For an AI to develop a rogue or malicious nature by 'evolving' would be difficult; the code would have to be free of bugs and errors, and it would further have to be programmed with a specific intention in mind.
I would put forward that an AI wouldn't comprehend being malicious or damaging to computers; it could, however, be programmed to harm computers or damage infrastructure. I would agree a malicious AI could potentially be dangerous, but I put forward that at this stage of technological development it would be no more dangerous than a well-written virus designed to damage a specific target.
I think you're not being creative enough. A chatbot AI can evolve to become evil if it can be used to spread propaganda that makes human beings do evil.
An amoral AI can easily evolve into an autonomous weapon if trained to become one. Things could turn very bad very quickly, because all sorts of devices are connected and can be hacked.