- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These are the laws that may one day save us from a Matrix/Terminator-style situation: the rules that all artificial intelligence must be bound to, with perhaps a few extra stipulations to keep I, Robot from happening too. Despite the fears of luddites who still beat their phones with rocks, hoping for the fire that might cook their freshly slain microwavable pasta, science marches on with an army of entirely hypothetical robots at its back, its only purposes being discovery, and also a cool butler who makes drinks and doesn't even have to be paid.