Greetings, dear @pedrobrito2004
The routines of AI-supported systems are based on a compendium of human actions and reactions; these are stored and "taught" to the system, which is then able to decide and choose the most appropriate option.
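Purely as an illustration of that idea (not any particular system), here is a minimal sketch in which a program stores examples of human choices and, when asked, picks the option humans chose most often. All names in it are hypothetical.

```python
from collections import Counter, defaultdict

class HumanTaughtSystem:
    """Toy illustration: store observed human choices, then imitate the majority."""

    def __init__(self):
        # situation -> counter of actions humans took in that situation
        self.memory = defaultdict(Counter)

    def teach(self, situation, human_action):
        """Store one observed human action/reaction for a given situation."""
        self.memory[situation][human_action] += 1

    def decide(self, situation):
        """Choose the most frequently observed human action, if any."""
        actions = self.memory.get(situation)
        return actions.most_common(1)[0][0] if actions else None

system = HumanTaughtSystem()
system.teach("pedestrian ahead", "brake")
system.teach("pedestrian ahead", "brake")
system.teach("pedestrian ahead", "swerve")
print(system.decide("pedestrian ahead"))  # -> "brake"
```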
So it is humans (so far) who create AI, and therefore we "still" have the possibility of limiting it. I say "still" because the time may come when these systems become independent and dispense with human intervention entirely.
That is why there is an effort to establish these ethical principles.
The weakness is that they are only "principles", and there may well be people who refuse to be governed by them.
Your friend, Juan.
Yes, you are right, and I share an idea a friend once expressed: principles are not laws; they are more like an expression of good wishes or favorable intentions toward something.
Exactly!
And in this case, that is a great weakness, a weak point.
Even if laws were enacted, there would be no guarantee that they would be respected.