Just in case we didn't have enough to worry about, The Times of London and the U.S. Office of Naval Research bring this to the table:
Autonomous military robots that will fight future wars must be programmed to live by a strict warrior code or the world risks untold atrocities at their steely hands.
The stark warning – which includes discussion of a Terminator-style scenario in which robots turn on their human masters – is issued in a hefty report funded by and prepared for the US Navy’s high-tech and secretive Office of Naval Research.
The report, the first serious work of its kind on military robot ethics, envisages a fast-approaching era where robots are smart enough to make battlefield decisions that are at present the preserve of humans. Eventually, it notes, robots could come to display significant cognitive advantages over Homo sapiens soldiers.

Under the "Legal Challenges" heading, the 112-page report has this to say, in part:
To whom would we assign blame—and punishment—for improper conduct and unauthorized harms caused by an autonomous robot (whether by error or intentional): the designers, robot manufacturer, procurement officer, robot controller/supervisor, field commander, President of the United States...or the robot itself?...The law offers several precedents that a robotics case might follow, but given the range of specific circumstances that would influence a legal decision as well as evolving technology, more work will be needed to clarify the law for a clear framework in matters of responsibility...The situation becomes much more complex and interesting with robots that have greater degrees of autonomy, which may make it appropriate to treat them as quasi-persons, if not full moral agents, at some point in the future.

Click here for the full text of the report.