ATLANTA — What could possibly go wrong?

That was the question some of the world’s leading roboticists faced at a technical meeting in October, when they were asked to consider what the science-fiction writer Isaac Asimov anticipated a half-century ago: the need to design ethical behavior into robots.

A lot has changed since then. Generally, we have moved from the industrial era of caged robots toward a time when robots will increasingly wander freely among us. On the military front, we now have “brilliant” weapons like self-navigating cruise missiles, pilotless drones and even Humvee-mounted, tele-operated M16 rifles.

Advocates in the Pentagon make the case that these robotic systems keep troops out of harm’s way and are more effective killing machines. Some even argue that robotic systems have the potential to wage war more ethically than human soldiers do, which, of course, sounds like an oxymoron. Proponents suggest that machines can kill with less collateral damage and are less likely to commit war crimes.

All of which makes questions about robots and ethics more than hypothetical for roboticists and policy makers alike.