Scientists at Brown University's Humanity-Centered Robotics Initiative are attempting to create a robot that can learn moral behavior from the people around it. Much as parents teach a child, the people looking after the robot would instill its morality and behavior. Once beyond the basics, robots could even crowd-source their ethical education: when two principles they have learned come into conflict, a robot could seek guidance and feedback from those it knows. But what happens if the robot falls in with the wrong crowd?