Thursday, January 9, 2020

The Case for Killer Robots By Robert J. Marks

A Moral Argument for Killer Robots: Why America's Military Needs to Continue Development of Lethal AI

Doomsday headlines warn that the age of "killer robots" is upon us and that new military technologies based on artificial intelligence (AI) will lead to the annihilation of the human race. In his new book, The Case for Killer Robots: Why America's Military Needs to Continue Development of Lethal AI, artificial intelligence expert Robert J. Marks investigates the potential military use of lethal AI and examines the practical and ethical challenges. 

This short monograph is published in conjunction with the Walter Bradley Center for Natural and Artificial Intelligence, and the Center is making it freely available as a digital book at the Mind Matters website. Physical copies are available through

In The Case for Killer Robots, these questions are answered:

·    Were AI weapons used in the U.S. conflict with Iran? 

·    Is the use of autonomous AI weapons new?  

·    How could AI have been used by Iran to disrupt U.S. operations?

·    The UN Secretary-General has proposed a ban on autonomous AI weapons. Will this help?

·    Is it easy to make killer robots? 

·    Will computers ever take over? Is Skynet from the "Terminator" movies possible with future AI?

·    How do high-tech weapons win, shorten, and prevent wars?

·    What does history teach us about the role of high technology like AI in warfare?

·    What is the history of opposition to high-tech weapons? What is the reasoning behind it, and why is it wrong?

·    What's the biggest danger from AI weapons?

·    What is the difference between autonomous and semi-autonomous weapons? Can we get by without using totally autonomous weapons?

"Marks makes a lucid and compelling case that we have a moral obligation to develop lethal AI," said Jay Richards, philosopher and author of The Human Advantage: The Future of American Work in an Age of Smart Machines. "He also reminds us that moral questions apply not to the tools that we use to protect ourselves, but to how we use them when war becomes a necessity."

Marks provocatively argues that the development of lethal AI is not only appropriate in today's world; it is unavoidable if America wants to survive and thrive into the future.

"I am an outlier in the sense that I believe AI will never be creative or have understanding," said Marks. "Like fire and electricity, AI is neither good nor bad. Those writing AI code and using AI systems are solely responsible for the morality and ethics of its use."

About the Author: Dr. Marks directs the Walter Bradley Center for Natural and Artificial Intelligence at Discovery Institute, and he is a Distinguished Professor of Electrical and Computer Engineering at Baylor University. Marks also heads the Center's daily news website, Mind Matters News, and hosts the Mind Matters Podcast.
