Where Google’s ethics on “AI weapons” falter

Jul 05, 2018, 8:01 AM EDT
(Source: totallyawesome_me/flickr)

Last month, Google moved swiftly to counter the backlash after reports of its collaboration with the U.S. military on an AI project made headlines. The tech giant released an AI ethics memo explicitly stating that Google’s AI scientists won’t work on military projects, but that’s hardly a solution to the existing problem of “war by algorithm.”

The U.S. drone strike on a family in Yemen in March is a glaring example of how a nexus of intelligence agencies and commercial tech giants relies on “metadata” to identify targets and eliminate them, a practice bordering on blatant violation of any ethical code, if not outright human rights abuse, reports Wired.

The tech world needs philosophers to plug the loopholes in its ethics codes on artificial intelligence and weapons, notes The Washington Post. Codifying ethical rules perfectly will remain a challenge unless we can draw a clear line between technologies that injure and those that prevent injury.