Robots can be flawless at making the logical calculations needed to attain predetermined goals. But they're fully programmable and predictable, and their own will, despite appearances, is zero: the very model of the optimum man according to industrial-era norms. They don't try to attain goals of their own.
And if a robot deviates from the course pre-determined by its programmers, it is considered faulty. It can no longer be of good service to its programmers.
But what if its programmers are not of good service to others?