The risks

In the long term, the question is: "What will happen if AI succeeds and becomes better than humans?"


It is about cognitive tasks. A system could potentially undergo self-improvement, triggering a real revolution. A new technology such as a super-intelligence could surpass human intelligence and so become a threat.

Further examples illustrate the dangerous paths AI could take. The most important concerns are that AI could become autonomous, so that we lose control of it, and the question of why we would create destructive systems at all.

Of course, AI accomplishes its goals, but it does not distinguish between good and bad. We have to be careful regarding the use of AI and not learn to use it as a weapon.

Video: Nick Bostrom on Artificial Intelligence and Existential Risks